This is my own proposal, based on my own experience in helping guide a startup through adoption of Claude Code across all roles. Right now, Claude does very little to encourage safe secret handling, which seems like a significant miss. Most engineers know how to mitigate risks of secret exposure (or avoid giving Claude secrets at all via proxies and such), but non-engineers simply don't have that learned skillset.
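For engineers, the usual mitigation today is a deny rule in Claude Code's project settings so the agent can't read secret files in the first place. A minimal sketch (the file paths are made up for illustration; the `permissions.deny` structure follows Claude Code's settings format):

```json
{
  "permissions": {
    "deny": [
      "Read(./.env)",
      "Read(./.env.*)",
      "Read(./secrets/**)"
    ]
  }
}
```

Placed in `.claude/settings.json`, this blocks the Read tool on those paths, though it is exactly the kind of learned convention non-engineers won't know to set up.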
Of course, none of this prevents Claude from running scripts that read and expose secret values at runtime. However, with LLMs building and testing so much software, this is one proposed piece to help reduce some vectors of exposure.
[Note: my submission was written with assistance from Claude.]
You will probably never be able to create actual Claude chats from OpenAI chats, but you could ask Claude to read and distill your old OpenAI conversations into context for a new Claude chat. It won’t be the same, but it’s better than nothing, depending on what you’re hoping to get out of it.
Considering they trained their model on open-source software, the least they could do is give it to open-source maintainers for free with no time limit. I’m sure they can come up with other ways to prevent abuse. The 6-months-free move just adds insult to injury, as if it were designed to extract even more from those who involuntarily contributed to the training already. And that’s coming from me, a Claude Code fan.
The double standards are so obnoxious. Corporations bent over backwards to lobby intellectual property into law, then they invent AI and suddenly everything turns into fair use.
> Considering they trained their model on open-source software, the least they could do is give it to open-source maintainers for free with no time limit.
Why? The resulting code generated by Claude is unfit for training, so any work product produced after the start of the subsidized program should be ignored.
Therefore it makes sense to charge them for the service after 6 months, no? Heh.
What do you mean it's unfit for training? It's a form of reinforcement learning; the end result has been selected based on whether it actually solved the need.
You need to be careful about the ratio of reinforcement learning to continued pretraining, but they already do plenty of other forms of reinforcement learning; I'm sure they have it dialed in.
That's absolutely what they are. That and other crimes. That's why they're mandatory, by law, in certain industries. That's _precisely_ why we started using them: to prevent the easily preventable.
I suppose this logic stands in the way of a corporation getting what it wants, and so it's automatically offensive to the HN "job-seeking" crowd; however, even a basic reading of the history shows it's completely true.
This. With so much of my work being done in Claude Code via the terminal, I’ve used vim and tmux more than I had in the 20 years since I was first introduced to them.
With all the buzz about orchestration in the age of CLI agents, there doesn't seem to be much talk about vim + tmux with send-keys (a blessing). You can run as many windows and panes as you like, doing different things across multiple projects.
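The send-keys pattern above can be sketched in a few lines. This is a minimal, hypothetical layout (session and window names are made up, and harmless `echo` commands stand in for real agent invocations):

```shell
# One detached session; one window per project, split into panes per agent.
tmux new-session -d -s agents -n projectA   # session "agents", window "projectA"
tmux split-window -h -t agents:projectA     # second pane for another agent
tmux new-window  -t agents -n projectB      # separate window for another project

# send-keys "types" into a target pane; C-m sends Enter.
# Target syntax is session:window.pane — the echo stands in for an agent command.
tmux send-keys -t agents:projectA.0 'echo done-with-projectA' C-m
tmux send-keys -t agents:projectB.0 'echo done-with-projectB' C-m

sleep 1                                     # give the panes a moment to run
tmux capture-pane -p -t agents:projectA.0   # read back what that pane shows
tmux kill-session -t agents                 # clean up
```

`capture-pane -p` is handy for checking on an agent without switching panes, which is what makes scripted orchestration from a single controlling shell possible.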
The way I see it, using tmux to orchestrate multiple agents is an intermediate step until we get a UI that can be a product offering. Assuming orchestration reaches the level it has been touted at, there is a world where tmux is unnecessary for the user. You would just type something into one pane where the "overlord" agent is running (the "mayor", if we're talking Gas Town lingo) and that agent would handle all the rest. I doubt jumping between panes will stick around as the product offering evolves.