
While following OpenClaw, I noticed an unexpected resentment in myself. After some introspection, I realized it’s tied to seeing a project achieve huge success while ignoring security norms many of us struggled to learn the hard way. On one level, it’s selfish discomfort at the feeling of being left behind (“I still can’t bring myself to vibe code. I have to at least skim every diff. Meanwhile this guy is joining OpenAI”). On another level, it feels genuinely sad that the culture of enforcing security norms - work that has no direct personal reward and that end users will never consciously appreciate, but that only builders can uphold - seems to be on its way out.



But the security risk wasn't taken by OpenClaw. Releasing vulnerable software that users run on their own machines isn't going to compromise OpenClaw itself. It can still deliver value for its users while requiring those same users to handle the insecurity of the software themselves: either by ignoring it, or by setting up sandboxes and the like to reduce the risk, and then weighing that reduced risk against the novelty and value of the software to decide whether the setup is worth it.

On the other hand, if OpenClaw were structured as a SaaS, this entire project would have burned to the ground the first day it was launched.

So by releasing it as something you needed to run on your own hardware, the security requirement was reduced from essential to a feature that some users would be happy to live without. If you were developing a competitor, security could be one feature you compete on: it would increase the number of people willing to run your software and reduce the friction of setting up sandboxes/VMs to run it.
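
For what it's worth, a lot of that sandbox friction is scriptable. Here's a minimal sketch of the kind of wrapper I mean, assuming Docker is installed; the image name and mount path are placeholders, not anything OpenClaw actually ships:

  # sandboxed_run.py -- sketch of launching an untrusted agent in a
  # locked-down container. Image name and paths are hypothetical.
  import subprocess

  subprocess.run([
      "docker", "run", "--rm",
      "--network", "none",    # no network at all; relax for a real agent
      "--read-only",          # read-only root filesystem
      "--cap-drop", "ALL",    # drop all Linux capabilities
      "--memory", "2g",       # cap memory
      "--pids-limit", "256",  # cap process count
      "-v", "/home/me/agent-workspace:/workspace",  # one writable mount
      "openclaw:latest",
  ], check=True)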


This argument has the same obvious flaws as the anti-mask/anti-vax movement (which unfortunately means there will always be a fringe that doesn't care). These things are allowed to interact with the outside world; it's not as simple as "users can blow their own system up, it's their responsibility".

I don't need to think hard to speculate on what might go wrong here - will it answer spam emails sincerely? Start cancelling flights for you by accident? Send nuisance emails to notable software developers for their contribution to society[1]? Start opening unsolicited PRs on matplotlib?

[1] https://news.ycombinator.com/item?id=46394867


At least where the Covid response is concerned, your concerns over anti-mask and anti-vaccine issues seem unwarranted.

The claims being shared by officials at the time were that anyone vaccinated was immune and couldn't catch it. Claims were similarly made that we needed roughly a 60% vaccination rate to reach herd immunity. With that precedent set, it shouldn't matter whether one person chose not to mask up or get the jab; most everyone else could do so to fully protect themselves, and those who couldn't would only be at risk if more than 40% of the population weren't on board with the masking and vaccination protocols.


> that anyone vaccinated was immune and couldn't catch it.

Those claims disappeared rapidly once it became clear the vaccines offered some protection, and reduced severity, but not immunity.

People seem to be taking a lot more “lessons” from COVID than are realistic or beneficial. Nobody could get everything right. There couldn’t possibly be clear “right” answers, because nobody knew for sure how serious the disease could become as it propagated, evolved, and responded to mitigations. Converging on consistent shared viewpoints, coordinating responses, and working through various solutions to a new threat on that scale was just going to be a mess.


Those claims were made after studies that were done over a short duration and that specifically only watched for subjects who reported symptoms.

I'm in no way taking a side here on whether anyone should have chosen to get vaccinated or wear masks, only saying that the information being pushed out by experts at the time doesn't align with an after-the-fact condemnation of anyone who chose not to.


I specifically wasn't referring to that instance (if anything I'm thinking more of the recent increase in measles outbreaks), I myself don't hold a strong view on COVID vaccinations. The trade-offs, and herd immunity thresholds, are different for different diseases.

Do we know that 0.1% prevalence of "unvaccinated" AI agents won't already be terrible?


Fair enough. I assumed you had Covid in mind with an anti-mask reference. At least in modern history in the US, we have only ever considered masks during the Covid response.

I may be out of touch, but I haven't heard about masks for measles, though it does spread through aerosol droplets so that would be a reasonable recommendation.


I think you're right - outside of COVID, it's not fringe, it's an accepted norm.

Personally I at least wish sick people would mask up on planes! Much more efficient than everyone else masking up or risking exposure.


Oh I wish sick people would just not get on a plane. I've cancelled a trip before, the last thing I want to do when sick is deal with the TSA, stand around in an airport, and be stuck in a metal tube with a bunch of other people.

We really needed to have made software engineering into a real, licensed engineering practice over a decade ago. You wanna write code that others will use? You need to be held to a binding set of ethical standards.

Even though it means I probably wouldn't have a job, I think about this a lot and agree that it should have happened. Nowadays, suggesting programmers should be highly knowledgeable about what they do will get you called a gatekeeper.

While it is literally gatekeeping, it's necessary. Doctors, architects, lawyers should be gatekept.

I used to work on industrial lifting crane simulation software. People used it to plan out how to perform big lift jobs to make sure they were safe. Literal "if we fuck this up, people could die" levels of responsibility. All the qualification I had was my BS in CS and two years of experience. It was a lucky circumstance that I was actually quite good at math and physics, which let me discover that there were major errors in the physics model.

Not every programmer is going to encounter issues like that, but neither can we predict where things will end up. Not every lawyer is going to be a criminal defense lawyer. Not every doctor is going to be a brain surgeon. Not every architect is going to design skyscrapers. But they all do work that needs to be warrantied in some way.

We're already seeing people getting killed because of AI. Brian in middle management "getting to code again" is not a good enough reason.


> While it is literally gatekeeping, it's necessary. Doctors, architects, lawyers should be gatekept.

That was exactly my point. It's one of those things where people deliberately use a word that is technically correct in a context where it doesn't, or shouldn't, hold true. Does this mean I want to stop people from "vibe coding" Flappy Bird? No, of course not, but as per your original comment, yes, there should be stricter regulations when it comes to hiring.


Yeah, I know what you mean. It is a weapon people throw around on social media sites.

You should join the tobacco lobby! Genius!

More straightforwardly, people are generally very forgiving when people make mistakes, and very unforgiving when computers do. Look at how we view a person accidentally killing someone in a traffic accident versus a robotaxi doing it. Having people run it on their own hardware makes them take responsibility for it mentally, which gives a lot of leeway for errors.

I think that’s generally because humans can be held accountable, but automated systems cannot. We hold automated systems to a higher standard because there are no consequences for the system if it fails, beyond being shut off. On the other hand, there’s a genuine multitude of ways that a human can be held accountable, from stern admonishment to capital punishment.

I’m a broken record on this topic but it always comes back to liability.


That's one aspect.

Another aspect is that we have much higher expectations of machines than of humans when it comes to fault tolerance.


Traffic accidents are the same symptom of fundamentally different underlying problems among human-driven and algorithmically-driven vehicles. Two very similar people differ more than the two most different robo taxis in any given uniform fleet— if one has some sort of bug or design shortcoming that kills people, they almost certainly all will. That’s why product (including automobile) recalls exist, but we don’t take away everyone’s license when one person gets into an accident. People have enough variance that acting on a whole population because of individual errors doesn’t make sense— even for pretty common errors. The cost/benefit is totally different for mass-produced goods.

Also, when individual drivers accidentally kill somebody in a traffic accident, they’re civilly liable under the same system as entities driving many cars through a collection of algorithms. The entities driving many cars can and should have a much greater exposure to risk, and be held to incomparably higher standards because the risk of getting it wrong is much, much greater.


Oh please, why equate IT BS with cancer? If the null pointer was a billion dollar mistake, then C was a trillion dollar invention.

At this scale of investment countries will have no problem cheapening the value of human life. It's part and parcel of living through another industrial revolution.


Exactly! I was digging into the OpenClaw codebase for the last two weeks and the core ideas are very inspiring.

The main work he has done to enable a personal agent is his army of CLIs, about 40 of them.

The harness he used, pi-mono, is also a great choice because of its extensibility. I was working on a similar project (1) for the last few months with Claude Code, and it's not really the best fit for a personal agent; it's pretty heavy.

Since I was planning to release my project as a cloud offering, I worked mainly on sandboxing it, which turned out to be the right choice given that OpenClaw is open source and I can plug in its runtime to replace Claude Code.

I decided to release it as open source because, at this point, software is free.

1: https://github.com/lobu-ai/lobu


> But the security risk wasn't taken by OpenClaw

This is the genius move at the core of the phenomenon.

While everyone else was busy trying to address safety problems, the OpenClaw project took the opposite approach: They advertised it as dangerous and said only experienced power users should use it. This warning seemingly only made it more enticing to a lot of users.

I've been fascinated by how well the project has dodged any consequences for the problems it has introduced. When it was revealed that the #1 skill was malware masquerading as a Twitter integration, I thought for sure there would be some reporting on the problems. The recent story about an OpenClaw bot publishing hit pieces seemed like another tipping point for journalists covering the story.

Though maybe this inflection point made it the most obvious time to jump off the hype train and join one of the labs. It takes a while for journalists to sync up and decide to flip to negative coverage of a phenomenon after they cover its rise, but now it appears the story has changed again before any narratives could build about the problems with OpenClaw.


I am guessing there will be an OpenClaw "competitor" targeting Enterprise within the next 1-2 months. If OpenAI, Anthropic or Gemini are fast and smart about it they could grab some serious ground.

OpenClaw showed what an "AI Personal Assistant" should be capable of. Now it's time to get it in a form-factor businesses can safely use.


With the guard rails up, right? Right?

Love passing off the externalities of security to the user, and then the second order externalities of an LLM that then blackmails people in the wild. Love how we just don’t care anymore.

I don't agree that making your users run the binaries means security isn't your concern. Perhaps it doesn't have to be quite as buttoned down as a commercial product, but you can't release something broken by design and wash your hands of the consequences. Within a few months, someone is going to deploy a large-scale exploit which absolutely ruins OpenClaw users, and the author's new OpenAI job will probably allow him to evade any real accountability for it.

Every single new tech industry thing has to learn security from scratch. It's always been that way. A significant number of people in tech just don't believe that there's anything to learn from history.

And the industry actively pushes away graybeards who have already been there and done that.

> being left behind (“I still can’t bring myself to vibe code. I have to at least skim every diff. Meanwhile this guy is joining OpenAI”).

I don't believe skimming diffs counts as being left behind. Survivor bias etc. Furthermore, people are going to get burned by this (already have been, but seemingly not enough) and a responsible mindset such as yours will be valued again.

Something that's still up for grabs is figuring out how to do fully agentic workflows in a responsible way. How do we bring the equivalent of skimming diffs to this?
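
One rough sketch of what that could look like (the tool names and dispatch stub here are invented, not any real harness's API): auto-approve read-only tools, and make a human confirm everything with side effects, the way you'd skim a diff before merging.

  # approval_gate.py -- "skimming diffs" for agent actions: every
  # side-effecting tool call is shown to a human before it runs.
  SAFE_TOOLS = {"read_file", "search_web"}  # no side effects

  def dispatch(tool_name: str, args: dict):
      # Stand-in for the real tool implementations.
      return {"ok": True, "tool": tool_name, "args": args}

  def approve(tool_name: str, args: dict) -> bool:
      # Show the proposed action and require an explicit yes.
      if tool_name in SAFE_TOOLS:
          return True
      print(f"Agent wants to run: {tool_name}({args})")
      return input("Allow? [y/N] ").strip().lower() == "y"

  def run_tool(tool_name: str, args: dict):
      if not approve(tool_name, args):
          return {"error": "rejected by user"}  # the agent sees the refusal
      return dispatch(tool_name, args)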


For my entire career in tech (~20 years) I have been technically good but bad at identifying business trends. I left Shopify right before their stock 4xed during COVID, because their technology was stagnating and the culture was toxic. The market didn't care about any of that; I could have hung around and become a millionaire. I've been at three early-stage startups, and the difference between winners and losers had nothing to do with quality or security.

The tech industry hasn't ever been about "building" in a pure sense, and I think we look back at previous generations with an excess of nostalgia. Many superior technologies have lost out because they were less profitable or marketed poorly.


  bad at identifying business trends
I think you’re being unduly harsh on yourself. At least by the Shopify/COVID example. COVID was a black swan event, which may very well have completely changed the fortunes of companies like Shopify when online commerce surged and became vital to the economy. Shortcomings, mismanagement and bad culture can be completely papered over by growth and revenue.

Right place, right time. It’s too bad you missed out on some good fortune, but it’s a helpful reminder of how much of our paths are governed by luck. Thanks for sharing, and wishing you luck in the future.


> seems to be on its way out

Change is fraught with chaos. I don't think exuberant trends are indicators of whether we'll still care about secure and high quality software in the long term. My bet is that we will.


i think your self reflection here is commendable. i agree on both counts.

i think the silver lining is that AI seems to be genuinely good at finding security issues, and maybe further down the line it'll be good enough to rely on somewhat. the middle period we're entering right now is super scary.

we want all the value, security be damned, and have no way to know about issues we're introducing at this breakneck speed.

still i'm hopeful we can figure it out somehow


building this openclaw thing that competes with openai using codex is against the openai terms of service, which say you can't use it to make stuff that competes with them. but they compete with everyone. by giving zero fucks (or just not reading the fine print), bro was rewarded by the dumb rule people for breaking the dumb rules. this happens over and over. there is a lesson here

underrated comment

and this is why they bought Peter. i’m betting he will come to regret it.


I don't know. It's more of a sharp tool, like a web browser (also called a "user agent"): yes, an inexperienced user can quickly get themselves into trouble without realizing it (in a browser or OpenClaw), and yes, the agent means it might even happen without you being there.

A security hole in a browser is an expected invariant not being upheld, like a vulnerability letting a remote attacker control your other programs, but it isn't a bug when a user falls for an online scam. What invariants are expected by anyone of "YOLO hey computer run my life for me thx"?


But in this case, following security norms would have been a mistake. The right thing to take away is that you shouldn't dogmatically follow norms. Sometimes it's better to just build things if there is very little risk.

Nothing actually bad happened in this case, and probably never will. Maybe some people have their crypto or identity stolen, but probably not at a rate significantly higher than background (lots of people are using OpenClaw).


> Lots of people are using OpenClaw

https://www.shodan.io/search?query=http.favicon.hash%3A-8055...

Indeed they are, at least 20,432 people :)
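
For anyone wondering how that query works: Shodan's http.favicon.hash is a MurmurHash3 of the base64-encoded favicon bytes, so you can reproduce the fingerprint yourself. A small sketch, assuming the mmh3 and requests packages:

  # Reproduce Shodan's http.favicon.hash for a given site.
  import base64

  import mmh3      # pip install mmh3
  import requests  # pip install requests

  resp = requests.get("https://example.com/favicon.ico", timeout=10)
  favicon_b64 = base64.encodebytes(resp.content)  # newline-wrapped base64
  print(mmh3.hash(favicon_b64))  # signed 32-bit int, as Shodan stores it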


So my unsubstantiated conspiracy theory regarding Clawd/Molt/OpenClaw is that the hype was bought, probably by OpenAI. I find it too convenient that not long after the phrase “the AI bubble” starts coming into common speech, we see the emergence of a “viral” use case that all of the paid influencers on the Internet seem to converge on at the same time. At the end of the day, piping AI output with tool access into a while loop is not revolutionary. The people who had been experimenting with these types of setups back when LangChain was the hotness didn't organically go viral, because most people knew that giving a language model unrestricted access to your online presence or bank account is extremely reckless. The “I gave OpenClaw $100 and now I bought my second Lambo. Buy my ebook” stories don't seem credible.

So don’t feel bad. Everything on the internet is fake.


The modern influencer landscape was such a boon for corporations.

For less than the cost of 1 graphics card you can get enough people going that the rest of them will hop on board for free just to try and ride the wave.

Add a few LLM-generated comments that don't throw the product in your face but make sure it is always part of the conversation, so someone else can do it for you for free, and you are off to the races.


Your introspection missed the obvious point that you just wish you were him. Your resentment had nothing to do with security. It's a self-revelation that you don't actually care about it either and you resent wasting your time.

At the end of the day, he built something people want. That’s what really matters. OpenAI and Anthropic could not build it because of the security issues you point out. But people are using it and there is a need for it. Good on him for recognizing this and giving people what they want. We’re all adults and the users will be responsible for whatever issues they run into because of the lack of security around this project.

Admittedly, I might not be the... targeted demographic here, and I can't say I understand what problem it solves, but even a cursory read immediately flags all the ways in which it can go wrong (including the recent 'rent a human' HN post). I am fascinated, and I wonder if it is partially that fascination that drives the current wave of adoption.

I will say openly: I don't get it and I used to argue for crypto use cases.


Well, OpenClaw has ~3k open PRs (many touching security) on GitHub right now. Peter's move shows killer product UI/UX, ease of use and user growth trump everything. Now OpenAI will throw their full engineering firepower at squashing those flaws in no time.

Making users happy > perfect security day one


"Peter's move shows killer product UI/UX, ease of use and user growth trump everything. "

Erm, is this some groundbreaking revelation?

It's always been that way, unless it's in the context of superior technology with minimal UI, à la Google Search in its early years.


Google Search did have a killer UI, though; you might be forgetting what search looked like before Google.

A list of results is not a killer UI.

The technology was the killer. Technology providing the right list of results and fast.

OH and believe it or not, this continues to be the core of Google today - they suck at product design and marketing.


It was killer compared to the alternatives. All the other "homepages" of the internet were a cluttered mess of ads.

I feel like we are arguing semantics, though. But IMO any UI that does the job consumers want, and does it well, is good UI. Just because it was simple doesn't mean it wasn't good.


Security is always the most time-consuming part of a backend project.

This is a normal reaction to unfairness. You see someone who you believe is Doing It Wrong (and I’d agree), and they’re rewarded for it. Meanwhile you Do It Right and your reward isn’t nearly as much. It’s natural to find this upsetting.

Unfortunately, you just have to understand that this happens all over the place, and all you can really do is try to make your corner of the world a little better. We can’t make programmers use good security practices. We can’t make users demand secure software. We can at least try to do a better job with our own work, and educate people on why they should care.


Hey, as a security engineer in AI, I get where you're coming from.

But one thing to remember: our job is to figure out how to enable these amazing use cases while keeping the blast radius as low as possible.

Yes, OpenClaw ignores all security norms, but it's our job to figure out an architecture in which agents like these can have the autonomy they need to act, without harming the business too much.

So I would disagree that our work is "on the way out"; it's more valuable than ever. I feel blessed to be working in security in this era - there has never been a better time to be in security. Every business needs us to get these things working safely, lest they fall behind.

It's fulfilling work, because we are no longer a cost center. And these businesses are willing to pay - truly life changing money for security engineers in our niche.
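
To make "low blast radius" concrete, here's a toy sketch of what least-privilege scoping for an agent could look like (every name here is invented for illustration): each agent instance gets an explicit allowlist instead of ambient access to everything.

  # Toy least-privilege policy: an agent may only use listed tools
  # and reach listed hosts; everything else is denied by default.
  from dataclasses import dataclass, field

  @dataclass
  class AgentPolicy:
      allowed_tools: set = field(default_factory=set)
      allowed_hosts: set = field(default_factory=set)

      def permits_tool(self, tool: str) -> bool:
          return tool in self.allowed_tools

      def permits_host(self, host: str) -> bool:
          return host in self.allowed_hosts

  # An inbox-triage agent can read mail and draft replies, but it
  # cannot send payments or reach unknown hosts.
  triage = AgentPolicy(
      allowed_tools={"read_email", "draft_reply"},
      allowed_hosts={"imap.example.com"},
  )
  assert triage.permits_tool("read_email")
  assert not triage.permits_tool("send_payment")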


Security is always a cost center. We've seen multiple iterations of change impact security in the same ways over the last 20+ years. Nothing is different here, and the outcomes will be the same: just good enough, but always a step behind. The one thing that is a new lever to pull here is time: people need far less of it to make disastrous mistakes. But ultimately the game hasn't changed, and security budgets will continue to be funneled into off-the-shelf products that barely work, with the remainder going to the overworked and underpaid. Nothing really changes.

I think you should give your gut instinct more credit. The tech world has gotten a false sense of security from the big SaaS platforms running everything, which make the nitty-gritty security details disappear into a seamless user experience, and that includes LLM chatbot providers. Even open source development libraries with exposure to the wild are so heavily scrutinized and well-honed that it's easy even for people like me, who started in the 90s, to lose sight of the real risk on the other side of that. No more popping up some raw script on an Apache server to do its best against whatever is out there. Vibe-coded projects trade a lot of that hard-won stability for the convenience of not having to consider some amount of the implementation details. People that are jumping all over this for anything except sandbox usage either don't know any better, or forgot what they've learned.

Totally agree. And the fact that the author says

> What I want is to change the world, not build a large company and teaming up with OpenAI is the fastest way to bring this to everyone.

does not make me feel all warm and fuzzy. Yeah, changing the world with Thiel's money. Try joining a union instead.


Change the world into what? Techno-feudalism?

Ever since I was four, I've dreamed of doing my part to bring that about.


Very happy to see techno-feudalism being mentioned here on HN.

Whatever the origins of the term, it now seems clear it’s kind of the direction things are going.


I recently met a guy that goes to these "San Francisco Freedom Club" parties. Check their website, it's basically just a lot of Capitalism Fans and megawealthies getting drunk somewhere fancy in SF. Anyway, he's an ultra-capitalist and we spent a day at a cafe (co-working event) chatting in a conversation that started with him proposing private roads and shot into orbit when he said "Should we be valuing all humans equally?"

Throughout the conversation he speculated on some truly bizarre possible futures, including an oligarchic takeover by billionaires with private armies following the collapse of the USA under Trump. What weirded me out was how oddly specific he got about all the possible futures he was speculating about that all ended with Thiel, Musk, and friends as feudal lords. Either he thinks about it a lot, or he overhears this kind of thing at the ultracapitalist soirées he's been going to.


So basically a bunch of rich tech edgelords are just doing blow and trying to bring about the world as depicted in Snow Crash?!

Guess I’ll have to get a Samurai sword soon and pivot to high stakes pizza delivery.

There are a disturbing number of parallels between Elon and L. Bob Rife.

It’s really disturbing that we have oligarchs trying to eagerly create a cyberpunk dystopia.


I was really into the idea of kings, knights, castles, princesses etc when I was 4.

I've been feeling this SO much lately, in many ways. In addition to security, there's just the feeling of spending decades learning to write clean code, valuing a deep understanding of my codebase and tooling, thorough testing, maintainability, etc., etc. Now the industry is basically telling me "all that expertise is pointless, you should give it up; all we care about is a future of endless AI slop that nobody understands".

I've been feeling a similar kind of resentment often. My whole life I have prided myself on being the guy who actually bothers to read the docs and understand how shit works. Seems like the whole industry is basically saying none of that matters, no need to understand anything deeply anymore. Feels bad, man.

AI slop will collapse under its own weight without oversight. I really think we will need new frameworks to support AI-generated code. Engineers with high standards will be needed to build and maintain the tools and technologies so that AI-written code can thrive. It's not game over just yet.

Thanks, I've been feeling the same way. But it seems like we're some years away from the industry fully realizing it. Makes me want to quit my job and just code my own stuff.


