Hacker News

The concern is not so much AI-generated news, but malicious actors misleading, influencing, or scamming people online at scale with realistic conversations. Today we already have large-scale automated scams via email and robocalls. Less scalable scams, like Tinder catfishing or Russian/Chinese trolls on reddit, are currently run by real people; imagine them being automated. If human moderators cannot distinguish these bots from real humans, that is a scary thought. Imagine not being able to tell if this comment was written by a human or a robot.


Why does this matter? The internet is filled with millions of very low quality human-generated discussions right now. There might not be much of a difference between thousands of humans generating comment spam and thousands of gpt-3 instances doing the same.


It does matter. The comforting feeling of being one of many is a feature of echo chambers. If you can create that artificially for any topic at the push of a button, it's a powerful tool to steer discourse or radicalize people.

Have a look at Russia's interference in the previous US election. This is what they did, but manually. Being able to scale and automate it is huge.


But be careful: the human psyche has some kind of tipping point. Too much fake news, and it will flip. Too little, and no real influence is made.

The exact balance would still have to be orchestrated by a human.


> imagine not being able to tell if this comment was written by a human or robot

I think neural nets could help find fake news and factual mistakes. Then it wouldn't matter who wrote a comment, as long as it is helpful and true.





