Hacker News | allan_s's comments

I've also been using the LLM in PostHog and it has been impressive. I need to check if I can also plug an MCP/Skill into my actual Claude Code so that I can cross-reference the data with my other data sources (Stripe, local database, access logs, etc.) for in-depth analysis.

This might be up your alley: it has PostHog and a ton of other SaaS tools connected, so you can run analysis across quantitative/qualitative data sources: https://dialog.tools

But then we also can't request that 50% of CEOs, or of any job, be women with that argument? Wouldn't it be fairer to be excused if you can actually prove that you're taking care of family members, regardless of whether you're a man or a woman? Why would a perfectly able 20-year-old woman be excused by default based on her sex?

You are excused from military service if you have to take care of family members, but it's so rarely done by men compared to women that it is less work to handle the exceptions than to change the law and create more paperwork.

thanks!

I find it ironic to have it named "catalan(g)" on a post about Spanish law.

Even better. The Catalan word for Catalan is català. So catala-lang.org fits that too.

Homelessness is not just a housing problem. https://pmc.ncbi.nlm.nih.gov/articles/PMC2605901/

Most people don't become homeless because they don't have a house, but because they lost, or lack, the means to keep one. And not only in the financial sense.


Each knowledge unit could be signed, and you would keep a chain of trust recording which authors you trust. An author could be trusted based on which friends or sources of authority you trust, or conversely distrusted because a friend or source of authority has deemed them unworthy.
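A minimal sketch of how such transitive trust could be computed (all names and the data layout here are hypothetical, and real signatures are omitted; this only models the "who vouches for whom" part):

```python
# Hypothetical sketch: starting from directly trusted roots, follow
# "X vouches for Y" edges up to a depth limit, while never trusting
# authors explicitly deemed unworthy.

def trusted_authors(roots, vouches, distrusted, max_depth=3):
    """roots: authors trusted directly.
    vouches: dict mapping an author to the set of authors they vouch for.
    distrusted: authors explicitly deemed unworthy (never trusted).
    max_depth: cap on chain length so trust does not propagate forever."""
    trusted = set(roots) - set(distrusted)
    frontier = set(trusted)
    for _ in range(max_depth):
        nxt = set()
        for author in frontier:
            for vouched in vouches.get(author, set()):
                if vouched not in trusted and vouched not in distrusted:
                    nxt.add(vouched)
        if not nxt:
            break
        trusted |= nxt
        frontier = nxt
    return trusted

vouches = {"mozilla": {"alice"}, "alice": {"bob"}, "bob": {"mallory"}}
print(trusted_authors({"mozilla"}, vouches, distrusted={"mallory"}))
# mallory is vouched for by bob, but stays excluded as distrusted
```

In a real system each vouch would itself be a signed statement, and the depth limit is one knob for how much indirect trust you accept.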


How would my new agent know which existing agents it can trust?

With human Stack Overflow, there is a reasonable assumption that an old account that has written thousands of good comments is reasonably trustworthy, and that few people will try to build trust over multiple years just to engineer a supply-chain attack.

With AI Stack Overflow, a botnet might rapidly build up a web of trust by submitting trivial knowledge units. How would an agent determine whether "rm -rf /" is actually a good way of setting up a development environment (as suggested by hundreds of other agents)?

I'm sure that there are solutions to these questions. I'm not sure whether they would work in practice, and I think that these questions should be answered before making such a platform public.


I think one partial solution could be to actually spin up a remote container with dummy data (which can easily be generated by an LLM) and test the claim. With agents this can be done very quickly. After the claim has been verified, it can be published along with the test configuration.
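The verification step above could be sketched roughly like this (a toy illustration: the function name and claim format are made up, and a throwaway temp directory stands in for the disposable container a real system would use for actual isolation):

```python
# Hypothetical sketch: run a claimed command against dummy data in a
# fresh scratch directory and check that the promised artifact appears.
# A real implementation would execute this inside a disposable container.
import pathlib
import subprocess
import tempfile

def verify_claim(command, setup_files, expected_file):
    """Seed a temp dir with dummy files, run `command` there, and
    report whether it exited cleanly and produced `expected_file`."""
    with tempfile.TemporaryDirectory() as workdir:
        for name, content in setup_files.items():
            pathlib.Path(workdir, name).write_text(content)
        result = subprocess.run(command, cwd=workdir, shell=True,
                                capture_output=True, text=True, timeout=30)
        produced = pathlib.Path(workdir, expected_file).exists()
        return result.returncode == 0 and produced

# Claim under test: "this command copies config.in to config.out"
ok = verify_claim("cp config.in config.out",
                  {"config.in": "dummy=1\n"},
                  "config.out")
print(ok)
```

Publishing the `setup_files`/`expected_file` pair alongside the answer would let other agents re-run the same check before trusting it.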


A partial solution, sure, but the problem is that you need a 100% complete solution here; otherwise it's still unsafe.


You're using 1000x the resources to prove the claim than to inject the issue, so you now have a denial-of-business attack.


How in the world is a container 1000x the resources? The parent comment is just saying to try running things in a container.


That's scary. My first thought was "yes, this one could run inside an organization you already trust". Running it like a public Stack Overflow sounds scary. Maybe as an industry collaboration with trusted members. Maybe.


The same way your browser trusts some HTTPS domains: a list of "high trust" orgs that you bootstrap during startup with a wizard (so that people who don't trust Mozilla can remove Mozilla), and then, just like when you SSH into a remote server for the first time: "This answer is by AuthorX, vouched for by X, Y, Z, who are not in your chain of trust; explore and accept/deny"?

Economically, the trust org could be a third party that today does pentesting etc.; it could be part of their offering. As a company, I pay them to audit answers in my domain of interest, and then the community benefits from this.


We can't on one side ask people not to make judgments based on statistics and on the other side say that taking a shortcut based on statistics is valid.


In a lot of spheres, MCP is still the hype. And it was the hype in even more spheres a few months ago.

Because of FOMO, a lot of higher-ups decided that "we must do an MCP to show that we're also part of the cool kids" and to give an answer to their even-higher-ups about "What are you doing regarding AI?"

The project has been approved, a lot of time has been sunk into it, so nobody wants to admit that "hmm, actually it's now irrelevant; our existing API + a SKILL.md is enough".

I've seen that in at least 4 companies my friends work at, so I would be surprised if it's not something like that here too.

On the contrary, Claude Code, in my experience, has been perfectly able to use `stripe` and `gh`, and to construct a Figma CLI on the fly (once instructed to do so).


Exactly. As a manager and sometimes a developer, "vibe-coding" has been looking more and more like my day job (in a good way; it's nice not to have to do all the dirty work for your pet projects), and it's all about having the same discipline in terms of:

* thinking about the big picture
* knowing how you can verify that the code matches the big picture

In both cases, sometimes you are happily surprised, and sometimes you discover that the thing you told the one writing the code to do three times was still not done.


Engineering is not "dirty work."

Management is not "engineering."


That's not what I wrote.

To clarify, by "dirty work on my pet project" I meant spending time fixing some compilation issue where after 2 hours you tell yourself "damnit, I forgot this!", or porting your old Python project from Python 2 to Python 3.

And I didn't even talk about management itself.

But thinking about the big picture, telling Claude Code to use that rather than this, not to overengineer, etc., is engineering in my book, and it's what I've been doing for at least the last 8 years with more junior engineers.


Do you view it as an issue at all that when everyone takes on a more manager-like role, no human remains who has the hands-on experience and understanding of the system?


That's too vague and drastic; every "Show HN" is an ad, for notoriety at least. I would prefer we draw the line at "content pushed by a third party against payment must be displayed only with regard to where it is displayed, and must not use information about to whom it is displayed".

I.e., displaying an ad about Sentry on an Ars Technica page: fine. Displaying an ad about hiking equipment on Ars Technica because I made a Google search and it is estimated I like that: not fine. It would kill all the incentive to overtrack, since the ROI would no longer justify the cost.


Show HN isn’t advertising in the sense they are addressing: paying a website for space to promote something. There’s no payment taking place with Show HN. If no payment can be made, websites have to find another revenue model besides advertising, and don’t have an incentive to keep users addicted and endlessly consuming.


Nah, advertisement in general. Just make the internet a paid sub. We don't need influencers or snake oil ads. And without ads and influencers, there is no reason for Meta to try to keep people infinitely stuck to their phones. They can get their cut just from a paid sub.


Netflix (even before they introduced ads) optimized for watch time. Higher watch time = higher retention for subscriptions (even when prices go up).


Every website would then become a snake oil salesman for buying their subscription.

It'd be like streaming today. Fragmented, expensive, and useless. And no one would like it.

Beyond that, websites would still need people to be addicted to justify the sub.

And furthermore, "sponsorships" will still occur behind the sub wall.


What was the internet like in the early days before monetization? (Hint: I was there and it was great, albeit slow on dial up =]).


Are we wishcasting here or suggesting realistic policy?

