No it didn't. Like I said... it may have gotten something that worked but there is no way Claude got it to work while supporting multi-spaces, multi-desktops, and using under 2% cpu utilization. My solution can display app window content even when those windows are minimized, which is not something the content server supports.
My point was that Claude realized all the SKC problems and came up with a solution that 99% of macOS devs wouldn't even know existed.
> it may have gotten something that worked but there is no way Claude got it to work while supporting multi-spaces, multi-desktops, and using under 2% cpu utilization.
Maybe, but that's the magic of LLMs - they can now one-shot or few-shot (N<10) their way to something good enough for a specific user. Like, not supporting multi-desktops is fine if one doesn't use them (and if that changes, a few more prompts about this particular issue - now the user actually knows specifically what they need - should close the gap).
Do you believe my brief overview of the problem will help Claude identify the specific undocumented functions required for my solution? Is that how you think data gets fed back into models during training?
Yes. I don't think you appreciate just how much information your comments provide. You just told us (and Claude) what the interesting problems are, and confirmed both the existence of relevant undocumented functions, and that they are the right solution to those problems. What you didn't flag as interesting, and possible challenges you did not mention (such as these APIs being flaky, or restricted to Apple first-party use, or such) is even more telling.
Most hard problems are hard because of huge uncertainty around what's possible and how to get there. It's true for LLMs as much as it is for humans (and for the same reasons). Here, you gave solid answers to both, all but spelling out the solution.
ETA:
> Is that how you think data gets fed back into models during training?
No, one comment chain on a niche site is not enough.
It is, however, how the data gets fed into prompt, whether by user or autonomously (e.g. RAG).
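To make that distinction concrete, here's a toy sketch (my own illustration, nothing to do with either poster's actual setup) of how a public comment can land in a model's prompt at inference time rather than in its weights. A bag-of-words cosine similarity stands in for a real embedding model, and all the strings are invented:

```python
import math
import re
from collections import Counter

def tokens(text: str) -> Counter:
    # Crude tokenizer: lowercase words only; a real system would use embeddings.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def similarity(a: str, b: str) -> float:
    # Cosine similarity over word counts - a stand-in for vector search.
    wa, wb = tokens(a), tokens(b)
    dot = sum(wa[w] * wb[w] for w in wa)
    norm = math.sqrt(sum(v * v for v in wa.values())) * \
           math.sqrt(sum(v * v for v in wb.values()))
    return dot / norm if norm else 0.0

def build_prompt(query: str, corpus: list[str], k: int = 2) -> str:
    # Retrieve the k most relevant documents and splice them into the prompt.
    ranked = sorted(corpus, key=lambda doc: similarity(query, doc), reverse=True)
    context = "\n".join(f"- {doc}" for doc in ranked[:k])
    return f"Context:\n{context}\n\nQuestion: {query}"

# Invented example corpus: one forum comment is relevant, one is noise.
corpus = [
    "Undocumented window server functions can capture minimized windows.",
    "Baking sourdough requires a mature starter.",
    "Supporting multiple Spaces needs per-space window enumeration.",
]
print(build_prompt("How do I capture minimized windows across Spaces?", corpus))
```

The point: nothing is "fed back during training" here. The comment is simply retrieved and pasted into the context window on demand, which is why hints dropped in public threads can surface immediately.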
> Yes. I don't think you appreciate just how much information your comments provide
Lol... no. You don't know how I solved the problem and you just read everything that Claude did.
Absolutely nothing in the key part of my solution uses a single public API (and there are thousands). And you think that Claude can just "figure that out" when my HN comments get fed back in during training?
I sincerely wish we'd see less /r/technology ridiculousness on HN.
I wonder how many 'ideas guys' will now think that with LLMs they can keep their precious to themselves while at the same time bragging about it in online fora. Before, they needed those pesky programmers negotiating for a slice of the pie, but this time it will be different.
Next up: copyright protection and/or patents on prompts. Mark my words.
I'm pretty sure a large fraction of the vibecoded stuff out there is from the "ideas guys." This time will be different because they'll find out very quickly whether their ideas are worth anything. The term "slop" substantially applies to the ideas themselves.
I don't think there will be copyright or patents on prompts per se, but I do think patents will become a lot more popular. With AI rewriting entire projects and products from scratch, copyright for software is meaningless, so patents are one of the very few moats left. Probably the only moat for the little guys.