The concern is not so much AI-generated news as malicious actors misleading, influencing, or scamming people online at scale with realistic conversations. Today we already have large-scale automated scams via email and robocalls. Less scalable scams, like Tinder catfishing or Russian and Chinese trolls on Reddit, are currently run by real people; imagine those being automated. If human moderators cannot distinguish these bots from real humans, that is a scary thought. Imagine not being able to tell whether this comment was written by a human or a robot.
Why does this matter? The internet is already filled with millions of very low quality human-generated discussions. There might not be much difference between thousands of humans generating comment spam and thousands of GPT-3 instances doing the same.
It does matter. The comforting feeling of being one of many is a feature of echo chambers. If you can create that artificially, for any position, at the push of a button, it's a powerful tool to shape discourse or radicalize people.
Have a look at Russia's interference in the previous US election. That is exactly what they did, but manually. Being able to scale and automate it is huge.