
It has a heartbeat operation and you can message it via messaging apps.

Instead of going to your computer and launching Claude Code to have it do something, or setting up cron jobs, you can message it from your phone whenever you have an idea, and it can set something up in the background, set up a scheduled report on its own, etc.

So it's not that it has to be running and generating tokens 24/7; it just idles 24/7 so it's there whenever you want to ping it.
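Roughly, the pattern being described looks like the sketch below. This is just a minimal illustration, not OpenClaw's actual implementation; the function names and the polling interval are made up.

    # Minimal sketch of the "idle with a heartbeat" pattern described above.
    # Not OpenClaw's real code; the function names and interval are hypothetical.
    import time

    HEARTBEAT_INTERVAL = 60  # seconds between wake-ups (hypothetical value)

    def check_inbox():
        """Poll a messaging-app bridge (e.g. a chat bot API) for new messages. Stub."""
        return []  # placeholder: a real version would return incoming messages

    def handle_message(msg):
        """Hand the message off to the coding agent as a new task. Stub."""
        print(f"spawning agent task for: {msg}")

    def run_heartbeat_tasks():
        """Run any scheduled jobs that are due (reports, reminders, etc.). Stub."""
        pass

    while True:
        for msg in check_inbox():       # wakes up when you ping it from your phone...
            handle_message(msg)
        run_heartbeat_tasks()           # ...or when a scheduled task comes due
        time.sleep(HEARTBEAT_INTERVAL)  # otherwise it just idles, burning no tokens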

Like what exactly? Can you give an example of the kinds of prompts you're sending for the agent to act on?

The messaging part isn’t particularly interesting. I can already access my local LLMs running on my Mac mini from anywhere.


AI companies must hate this, right? Because they're selling tokens at a loss?

Google has started banning accounts that use Antigravity's discounted access instead of paying full price for the API: https://github.com/openclaw/openclaw/issues/14203

> Impact:
> Users are losing access to their Google accounts permanently
> No clear path to account restoration
> Affects both personal and work accounts

Honestly, this is why I wouldn't trust Gemini for anything. I have a lot tied to my Gmail; I'm not going to risk that for some random AI that insists on being tied to the same account.


They blocked your entire Gmail/Google account, not just Gemini access?

That's a recipe for bots to ruin a lot of people's lives.


Using different Google accounts won't save you; once Google decides to ban you for a TOS violation, all related accounts go with it: https://news.ycombinator.com/item?id=30823910

Bold of you to assume profitability is one of their KPIs

My understanding was that if everyone both paid for and actually used AI, the companies would go into liquidation on energy bills, etc.

Energy bills wouldn't be the problem if everyone used AI; energy supply would be.

Are you sure? I thought tokens (or watts) were sold at such a loss that if current supply limits were reached, they'd go broke.

By nearly every estimate, the marginal cost of serving AI models is fully covered by the providers' API prices. The costs not currently being recouped are training and the net-new infrastructure they're building.
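As a toy back-of-envelope (every number here is made up, purely for illustration), the distinction looks like this:

    # Toy unit-economics sketch; all numbers are hypothetical and illustrative only.
    # It separates the marginal cost of serving tokens (covered by API revenue)
    # from the fixed cost of training and new infrastructure (not yet recouped).

    price_per_m_tokens = 10.00          # hypothetical API price per million tokens
    serve_cost_per_m_tokens = 3.00      # hypothetical compute/energy cost to serve them
    tokens_served_m = 50_000_000        # hypothetical millions of tokens served per year
    training_and_infra = 1_000_000_000  # hypothetical one-off training + build-out spend

    inference_margin = (price_per_m_tokens - serve_cost_per_m_tokens) * tokens_served_m
    net = inference_margin - training_and_infra

    print(f"Inference gross margin: ${inference_margin:,.0f}")  # positive per-token economics
    print(f"Net after training/infra: ${net:,.0f}")             # can still be deeply negative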

So why are they banning people from using it in systems like OpenClaw?

These companies are generally profitable on inference, but that doesn't cover the cost of R&D (training).

If it's profitable, why are they banning people from using it in systems like OpenClaw?

After a quick search, it looks like Google is banning some people who are using Antigravity OAuth with OpenClaw rather than paying for API access.

I can't find any instance of an API which charges per-token banning users.


From all indications, the big players have healthy margins on inference.

Research and training are the cost sinks.


Is that just because people pay for subscriptions and never use their tokens? Same model as ISPs.
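If so, it's basically oversubscription. A toy sketch with made-up numbers:

    # Toy oversubscription sketch; all numbers are hypothetical.
    # A flat subscription stays profitable as long as the *average* subscriber
    # uses far fewer tokens than the plan nominally allows.

    subscription_price = 20.00       # hypothetical monthly plan price
    serve_cost_per_m_tokens = 3.00   # hypothetical cost to serve 1M tokens
    plan_cap_m_tokens = 30           # hypothetical nominal monthly allowance (millions)
    avg_usage_m_tokens = 2           # hypothetical average actual usage (millions)

    avg_margin = subscription_price - avg_usage_m_tokens * serve_cost_per_m_tokens
    maxed_out_margin = subscription_price - plan_cap_m_tokens * serve_cost_per_m_tokens

    print(f"Margin on the average subscriber: ${avg_margin:.2f}")       # 20 - 6  = +14
    print(f"Margin if everyone maxed out:     ${maxed_out_margin:.2f}")  # 20 - 90 = -70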


