Hacker News

Although the demos are impressive, they seem short and limited in scope, which makes me wonder how well this will work outside of these planned cases. Can it do software architecture at all? Is it still essentially just regurgitating solutions? How often will the solution be only 90% correct, which is 100% not good enough?

Even so, I realize the demos are still broad in scope and the results are incredible. Imagine seeing this even 2 years ago: it would seem like magic; you wouldn't be able to believe it. Today it feels inevitable and entirely believable. There will be even better versions of this soon.



There is a similar product called Sweep AI that I tried. For extremely simple things, like "add a button to the page that prints hello", it was very good. I then asked it to do something more complex: render my d3.js graph vertically rather than horizontally. It tried to redefine constant variables (it just added a new modified code block without deleting the old one) and put function clauses in syntactically invalid places. After I manually fixed those, the functionality just didn't work.
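For what it's worth, the const-redefinition failure mode looks something like the sketch below (variable and function names are invented for illustration; the actual graph code isn't shown here). Appending a second declaration instead of editing the first is a hard syntax error in JS/TS, which is why the generated patch couldn't even run:

```typescript
// The kind of patch the tool produced (shown in comments, since it
// won't compile): a second declaration appended below the first.
//
//   const orientation = "horizontal";
//   ...
//   const orientation = "vertical"; // SyntaxError: already been declared
//
// A valid patch replaces the binding, or better, parameterizes it:
type Orientation = "horizontal" | "vertical";

function layoutAxis(orientation: Orientation): { x: string; y: string } {
  // Swap which screen axis the graph's primary axis maps to.
  return orientation === "vertical"
    ? { x: "value", y: "position" }
    : { x: "position", y: "value" };
}

console.log(layoutAxis("vertical")); // { x: "value", y: "position" }
```

The point being: an append-only edit strategy can't express "change this existing binding", so these tools need real diff/patch semantics, not just code generation.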


This seems like something Replit is better suited to execute. You will need human intervention at some point, and at that point you're building a full-fledged web IDE.


Ah yes, you've entered the first stage of grief: denial. Next you'll get angry, then bargain, then get depressed; eventually you'll just accept that AI is taking over software.

In my mind, I've concluded that I have less than 3 years to find an off ramp.


No, I'm past the denial stage (I was certainly there, though... GPT threw me hard and I spent a good 2 months processing what was happening) but I don't, in this specific case, see this agent displacing many jobs yet. Well, not my kind of jobs. I'm already very worried for the entry level of our industry... I'm not sure what it means yet, but I don't think we will have many entrants into software careers within 5 years or so.

I'm looking for an offramp as well. I truly love software so it's a hard reality to contend with at times. Regardless, I'm not a software engineer at my core. I'm a problem solver, and I love creating things. This is transferable. I'm not sure where to yet, but I'm certainly capable of making a move. I hope I have more than 3 years, though.

As someone else asked, I'm curious where you're headed or thinking of heading so far.


I feel the same way.

I think the immediate future is bright. Some will try to cram this technology into enterprise. It will do well. Those jobs will die. Others will leverage it alongside the more creative engineering tasks—they will thrive.

Eventually, what we call software will change from what it is today into something much more accessible to these types of tools. The plateau we've landed on is just a compromise between the technology, economies, and culture of its time.

As this type of tech pervades our everyday lives, much of the widespread need for specialized software will evaporate. The remaining work will be in the corners or the edges of what's possible—highly technical or vertically integrated work (particularly, hardware-integrated stuff), as well as platform engineering to sustain the new paradigm.


> Eventually, what we call software will change from what it is today into something much more accessible to these types of tools.

That's a very good point. There's that tongue-in-cheek assertion that Java isn't a human-facing but rather a tool-facing programming language. And now I'm wondering: what would a high-level language actually designed from the ground up for manipulation by AIs rather than humans look like? In contrast, I think the human-facing interface to it might end up much more visual and graph-oriented, possibly similar to Unreal Engine's Blueprints.


Good question.

I expect that we're moving into a phase of AIs talking to AIs, and initially it'll be wasteful (because it'll be mostly English), but eventually, they'll derive their own language and seamlessly upgrade protocols when they determine they're talking to an AI. No clue how that will come about or what that language will look like, but honestly, it's kind of exciting.

Really interesting to think about how they might handle context, too. Even though we have much bigger context windows (and they'll only get larger), context management is still a resource-management problem, and one we'll probably keep refining. Imagine different strategies for managing both what is brought into the context of each request and what form it takes (level of detail, additional references, commentary on it, etc.). Things could get really unreadable even in English and still be very interpretable to an LLM.
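One such strategy can be sketched as a relevance-ranked, budget-limited packer. This is a toy (all names are invented, and real systems would score relevance with embeddings rather than word overlap), but it shows the shape of the problem: decide what fits in the window, in what order:

```typescript
// Toy context packer: rank candidate snippets by a crude relevance
// score, then greedily pack the best ones into a fixed token budget.
interface Snippet {
  text: string;
  tokens: number; // pretend token count
}

// Crude relevance: how many words the query and snippet share.
function relevance(query: string, snippet: Snippet): number {
  const q = new Set(query.toLowerCase().split(/\s+/));
  return snippet.text
    .toLowerCase()
    .split(/\s+/)
    .filter((w) => q.has(w)).length;
}

function packContext(query: string, pool: Snippet[], budget: number): Snippet[] {
  // Sort a copy so the caller's pool is left untouched.
  const ranked = [...pool].sort(
    (a, b) => relevance(query, b) - relevance(query, a)
  );
  const chosen: Snippet[] = [];
  let used = 0;
  for (const s of ranked) {
    if (used + s.tokens <= budget) {
      chosen.push(s);
      used += s.tokens;
    }
  }
  return chosen;
}
```

Swap the scoring function, summarize instead of dropping low-relevance snippets, or vary the budget per request, and you get exactly the kind of strategy space described above.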

W.r.t. the graph-oriented interfaces, are you thinking something like Node-RED [1]? I'm seeing more and more people mention having LLMs produce non-text or structured outputs, like JSON, UI, and other things. Easy to imagine an LLM that wires together various open-source platforms, on-demand. Something like Node-RED for pipelines/functions, some UI tools for visualization/interactivity, other platforms for messaging, etc...

[1] https://nodered.org/
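The "LLM wires together platforms via structured output" idea can be sketched as a tiny Node-RED-inspired flow runner. Everything here is hypothetical (the flow shape, the step names); the point is just that a model emitting JSON, rather than prose, is enough to drive a pipeline:

```typescript
// A toy flow runtime: an LLM emits a JSON "flow" (an ordered list of
// named steps), and this runtime wires the steps together.
type Step = { op: "uppercase" | "trim" | "exclaim" };

const ops: Record<Step["op"], (s: string) => string> = {
  uppercase: (s) => s.toUpperCase(),
  trim: (s) => s.trim(),
  exclaim: (s) => s + "!",
};

// Run the flow by piping the input through each step in order.
function runFlow(flow: Step[], input: string): string {
  return flow.reduce((acc, step) => ops[step.op](acc), input);
}

// Pretend this JSON came back from the model as structured output:
const flow: Step[] = JSON.parse(
  '[{"op":"trim"},{"op":"uppercase"},{"op":"exclaim"}]'
);
console.log(runFlow(flow, "  hello ")); // "HELLO!"
```

Replace the string ops with HTTP calls, message-queue publishes, or UI renders and you have the on-demand wiring described above, with the model only ever producing declarative JSON.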


Keep posting about this. I feel the same way.


There is not a limited amount of software engineering that can be done. There's only the amount of software engineering that is _economical_ to do at any given point. If AI makes software development cheaper and more efficient, people will just apply it to more use cases. It's never been the case that making programming cheaper has led to fewer programmers -- quite the opposite.

This change is roughly analogous to the shift from punch cards to compilers. It's just a more "natural language" way to do programming. A lot of the drudgery associated with coding will go away and competent programmers will shift to higher level design work.

Even in a future where AI is better than human software engineers at every single programming task (which I don't believe will be the case any time soon), there is still comparative advantage. AI will not have the _capacity_ to do every single programming task and there's going to still be lots of work for people to do.


The problem is that AI has the potential to turn a professionalized industry into a self-help one. Like, if I can pay $30 a month to have an AI make me several new apps a day, why would I need an app store? It's like fast fashion for programs. Low quality but dirt cheap.


Why would I need an app when most of the time I could just make it in Excel?


But that's OK!

This is exactly what I want.

I don't want to describe some enterprise architecture with DR and blah.

No, I want the little utils to just appear, like little magic fairy dust...

Little apps. This is how a new ecosystem grows.


But if building an app is as easy as voicing a requirement, and getting an updated version is as simple as asking for a change, then why should there be an ecosystem at all, rather than just bespoke on-the-fly apps for every single individual?

As a tangent, I've been thinking about this in regards to the edtech space. For now, the focus is on GenAI tools for quickly building courses, but if we have AI with the combined skills of a Subject Matter Expert, Instructional Designer and Tutor, why would we need to build courses at all, if instead it could just teach anyone anything on the fly, based on their specific background and interests, and adapting immediately to their reactions?

We've already been talking a lot about bubbles / echo chambers over the last decade, and I imagine that this will get a lot worse/better over the coming decade, depending on your perspective of the value of shared experiences.


I think a lot of our plans with AI are skeuomorphic. (Skeuomorphs are when an older, outdated thing is preserved in the updated thing, like how the Save icon is a floppy disk.)

"Generative AI will mean that Pixar movies will be made by AI!" Likelier it means that generative AI will create an entirely new medium beyond movies.


It's the AI hype crowd that's in a stage of denial.

"Surely if we generate code, it means we can replace developers right?"

Well no, it's not nearly enough. Good code generation with a UI has been a thing since the '90s; it's replacing developers about as much as FrontPage and WordPress do.

The average developer writes about 100 lines per day anyway, so the speed of writing those lines doesn't matter.

Software is, at its core, an interpersonal job, and it is the most precise definition of a business.

The day software engineers are replaced by a machine, there's honestly no job left for the CEO either, and that's also part of why this job is well paid. I'm sure it will be all over LinkedIn though, despite this paradox.

Those new tools will be there in the background but cannot replace a software engineer.


Oh, FrontPage. Thanks for that; you’ve made my day.


I've been using GPT-3 since its waitlist opened. Even then you saw demos like this claiming sentence -> full, complete project. The demos will get fancier, but reality is much more complex than you think.


It took me much longer than usual to get past the depression phase. The main reason is that everyone online kept telling me I was dumb, too inexperienced, or extremely incompetent if I thought AI was a serious threat to me, a junior developer. Friends offline said similar things, just in a nicer way.

Now everyone seems to be a "doomer" like me. I was too stupid to see that people (especially online) were lashing out at me as a coping mechanism. The worst part is that even though my "we're fucked" realization came earlier than many others', I didn't really do anything productive: I didn't make the right moves to pivot, and I had no plan. Now I have a rough ~3-year off-ramp plan, just like everyone else.

I suspect that there are many, many recent grads in my position, who also thought they were taking crazy pills when everyone around them dismissed their concerns.


If the circle you're listening to is "recent grads", stop listening to them.

They don't know how the industry works or what it needs, few of them know how the technology works or what its realistic near-term prospects are (let alone the long-term ones), and none of them have lived long enough to understand the pace of a technology moving from discovery through to maturity and commercialization.

If you look to your experienced seniors instead of your peers, you'll discover two things:

1. They don't respect most contemporary juniors because of the industry boom and are happy to see most of you scare yourselves off

2. They're settling into a familiar, respectful appreciation of recent content generators and their potential for assisting some basic use cases (highly valuable technology!), and they're increasingly of the opinion that both the hype and the fear of Q1 2023 were largely market manipulation by VCs trying to squeeze out another boom as money was tightening


Well, I did see that, or at least heard it from people who claimed to be seniors online, plus some mid-level friends and, yes, a junior friend. Admittedly, it's all anecdotes, but it's very difficult to get hard facts, even from perusing bls.gov.

But something changed in the past 4 months or so. I used to see the two things you described, but now I see seniors repeating the same thing juniors (whom they ridiculed) used to say. Look at this very thread, for example. People are starting to realize that even if these AI startups are making ridiculous promises, the tech might be way more than just hype/smoke and mirrors.

I've read quite a few comments in HN, and right here in this thread, that say something along the lines of "I'll be fine... It's the juniors I'm worried about".


Experienced, older people are notorious for downplaying and dismissing the implications of revolutionary new technologies. They may be right this time, or they may be wrong. But age and experience aren't as relevant when it comes to revolutions.


And inexperienced people are notorious for falling for unfounded hype. Eventually they either learn to discern value, which is hard, or just write off hype, which usually works fine but occasionally makes them late to the party.


Many things have been claimed to be the "death" of software careers (.com crash, outsourcing, remote work, etc.) and yet we have more demand than ever.


What kind of work do you have in mind?


In terms of my "off ramp"?

I have a multi-part plan. Immediately, I'm working to get closer to the business, to be closer to the position of defining requirements rather than implementing them.

Second, I'm experimenting with ideas I hope can become a business.

As a final fallback, I have a hobby woodshop in my basement, and I love making furniture.


Making furniture! But we've had machines to do that for 100 years!

Programmers will be fine. The AI plagiarism engines are severely limited, and will be for the foreseeable future. Maybe someday I'll be the equivalent of the ren-faire blacksmith, but I'm gonna do this until I die.


What about when the AI defines requirements?


One year ago I was thinking ~10 years, with at least 5 of them safe. Considering how it's all progressing right now, and that ChatGPT arrived just 15 months ago, I think you might be right, or salaries will get reduced significantly.



