
Would that make the LLM (or the company that made it) liable under the DMCA for showing someone how to work around a digital lock that controls access to a copyrighted work?

And just like AdNauseam, using it would be dangerous and pointless.

> I disagree. Giving fake info adds noise to the mechanism, makes it useless.

There's no such thing as useless info. Companies will sell it, buy it, and act on it regardless of how true it is. Nobody cares if the data is accurate. Nobody is checking to see if it is. Filling your dossier with false information about yourself won't stop companies from using that data. It can still cost you a job. It can still be used as justification to increase what companies charge you. It can still influence which policies they apply to you or what services they offer/deny you. It can still get you arrested or investigated by police. It can still get you targeted by scammers or extremists.

Any and all of the data you give them will eventually be used against you somehow, no matter how false or misleading it is. Stuffing your dossier with more data does nothing but hand them more ammo to hit you with.


They can profit off of the personal data they collect, so it's no surprise they'd take any opportunity and use any available excuse to collect more of it. From their perspective there is no real obligation to secure that data properly or handle it safely, because there are effectively zero consequences for companies that fail to.

I was surprised by the number of Bibles too! I don't think I've ever seen one as litter (not counting those left in hotel rooms), but I've seen other kinds of religious literature like tracts, booklets, and Watchtower magazines.

That's the kind of thing that people like to hand out to people walking by. Many people, if handed a booklet they didn't actually want to read, will just toss it on the ground.

Those people are the worst. If you don't want something, don't take it. Don't make it everyone else's problem by littering.

As someone who has been pressured to take a book by random (mostly religious) people on a college campus, I wouldn't put the blame entirely on the person taking it.

If you choose to accept a book because you are too uncomfortable to say the word "no" then you should accept that it is your responsibility to dispose of the book appropriately.

Don't blame other people for your own bad behavior.


I didn’t say that I chose to accept the book and then threw it away. I said that I said no and the other person proceeded to drag out the interaction in a way that made everyone there uncomfortable.

The best thing, if you accepted the book but realized within a few steps (maybe immediately) that you didn't actually want it, would be to walk back to the person handing it out and say "Changed my mind, don't actually want it, why don't you give it to someone else?" I know some people who hand out religious tracts and other such materials, and every one of them that I know personally would accept the item back with good grace. They'd rather give it to someone who will actually read it.

And if they're the kind of person who won't take it back with good grace? Place it on the ground right next to them, and walk away. Make it their responsibility to deal with it. (That's if you don't want to go out of your way to find a trash can; some public spaces make them easy to find, but others not so much.)


It'd be an interesting jobs program. Cleaning up neighborhoods can have a lot of beneficial effects, like reducing the amount of new litter. It could even reduce crime. It's also a job that would get people outside and keep them moving, which is probably better for their health than being chained to a desk all day, and it can't be done (even poorly) by a chatbot.

Heck, if the pay is reasonable, count me in!

I'm starting to suspect I might be cynical. I was pretty impressed by the "1,000,000 cigarette butts that I removed from the environment" but I couldn't help but think "moved into what?", which brought this (https://youtu.be/3m5qxZm_JqM) to mind:

   [Interviewer:] Into another environment….

   [Senator Collins:] No, no, no. It’s been towed beyond the environment, it’s not in the environment.

   [Interviewer:] Yeah, but from one environment to another environment.

   [Senator Collins:] No, it’s beyond the environment, it’s not in an environment. It has been towed beyond the environment.

   [Interviewer:] Well, what’s out there?

   [Senator Collins:] Nothing’s out there…

Also, I couldn't help but wonder if he was removing trash at a faster rate than it was being added. Picking up litter is certainly a good thing, but we really need to get people to stop creating it in the first place. Even properly disposed of, all that trash is a massive problem, but I'd love to see more effort going into getting people to clean up after themselves. A very long time ago I'd see PSAs with owls imploring us to "Give a hoot" and fake Indians crying. Was that helpful? Does that kind of thing even exist today? Now that nobody watches TV, are they pushed at kids on TikTok?

Anti-littering messaging works remarkably well. Littering is the kind of antisocial activity where the benefit to the individual is marginal (maybe you save a bit of energy by holding on to your trash until the next trash can) but the penalty is almost nonexistent, since practically no one gets cited for littering.

A clear reminder not to litter mostly just signals to people that other people care, but that works remarkably well.

I belonged to a service org in college that required each member to do something like 30 hours of community service a semester. Mostly we did stuff like working at food pantries, but if you didn't have time in your schedule, you could go down to the beach and wetlands and pick up trash. Perhaps not as high-impact as feeding the hungry, but it was something. Well, after a few of these trips I realized that a significant fraction of the trash we were picking up was styrofoam food containers, which was weird, since California had drastically cut back on styrofoam by that point (though the total ban only came into effect this year).

Turned out that there were exactly two restaurants anywhere near the wetlands that used styrofoam food containers, so a buddy and I took it upon ourselves to go talk to them. Ideally we would talk them out of using styrofoam, but at the very least it would be good to let them know that they were single-handedly fucking up this nice slice of nature.

One of the places straight-up stopped using styrofoam altogether. Both were perfectly happy to let us hang up a sign basically saying "Hey, we collectively spend 200 hours a year trying to clean up these wetlands, please don't litter".

Food containers from those restaurants all but completely disappeared from the wetlands after that. People tend to do the right thing, but sometimes they just need a little push.


Re the environment thing, Practical Engineering has a good video on landfills. There's a bunch of engineering that goes into making waste less harmful: https://practical.engineering/blog/2024/9/3/the-hidden-engin...

> Also, I couldn't help but wonder if he was removing trash at a faster rate than it was being added.

I wonder if people are less likely to litter if they don't see any other litter already on the ground.


I'm fairly certain that it helps. Obviously someone has to start, but when it looks like no one cares, others are more likely to contribute to the problem or, worse, assume that leaving trash there is what's expected of them. The Cart Narcs guy has observed a similar trend with shopping carts: if somebody puts one where it doesn't belong, it can attract others. You'd think that if people were going to be lazy and leave their carts in the parking lot instead of returning them properly, they'd just leave them near their own cars, but some people will go out of their way to put theirs next to other carts even when that's still clearly not where they belong.

It looks like he might keep them in his own local environment for photo documentary / artistic purposes.

Classified multi-year contracts and government-funded compute are hard to walk away from when you're burning cash at that rate. Defense economics always do this to companies. Same thing that consolidated the primes in the 90s.

Wrote about why the door only opens one way: https://philippdubach.com/posts/when-ai-labs-become-defense-...


He's got to have a decent bit of land to keep it all, which makes it all the more impressive that he found all that trash in his city.

There are good reasons not to trust Signal. The very first line of their privacy & terms page says "Signal is designed to never collect or store any sensitive information," but then they started collecting and permanently storing sensitive user data in the cloud and never updated that page. Much more recently they started collecting and storing message content in the cloud for some users, but they still refuse to update that page. I'm pretty sure it's a big fat dead canary warning users away from Signal. Any service that markets itself to whistleblowers and activists and then outright lies to them about the risks they take when using it can't be trusted for anything.

We could already use social media posts to detect mental illness: by admission, as people talk openly about their diagnoses, but also by analysis of the content, tone, and frequency of posts that don't mention mental illness at all.

Data brokers already compile lists of people with mental illness so that they can be targeted by advertisers and anyone else willing to pay. Not only are they targeted, but they can get ads/suggestions/scams pushed at them during specific times such as when it looks like they're entering a manic phase, or when it's more likely that their meds might be wearing off. Even before chatbots came into the mix, algorithms were already being used to drive us toward a dystopian future.
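
Purely as an illustration of how crude that kind of inference can be, here's a minimal sketch in Python. Everything in it is invented for the example (the word lists, the features, the sample posts); real systems use trained classifiers over far richer signals, but the shape of the pipeline is the same: reduce a post history to a handful of behavioral features and act on them.

  from datetime import datetime

  # Toy word lists, invented for illustration; real systems use trained
  # classifiers, not hand-built lists like these.
  NEGATIVE = {"hopeless", "worthless", "alone", "exhausted", "empty"}
  MANIC = {"unstoppable", "genius", "epic", "nonstop", "invincible"}

  def features(posts):
      """posts: list of (timestamp, text) tuples, oldest first."""
      words = [w.strip(".,!?").lower() for _, t in posts for w in t.split()]
      span_days = max((posts[-1][0] - posts[0][0]).total_seconds() / 86400, 1.0)
      return {
          "posts_per_day": len(posts) / span_days,
          "neg_ratio": sum(w in NEGATIVE for w in words) / max(len(words), 1),
          "manic_ratio": sum(w in MANIC for w in words) / max(len(words), 1),
      }

  # Fabricated sample: a burst of grandiose, late-night posts.
  posts = [
      (datetime(2024, 5, 1, 9), "Feeling unstoppable today, epic plans!"),
      (datetime(2024, 5, 1, 14), "Genius insight: sleep is optional."),
      (datetime(2024, 5, 1, 23), "Still up. Invincible."),
  ]
  print(features(posts))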


> But the fact that the AI was reportedly offering help lines argues strongly in the direction of "this was a fantasy exercise".

You know what I've never had a DM do in a fantasy campaign? Suggest that my half-elf call the suicide hotline. That's not something you'd usually offer to somebody in a roleplaying scenario and strongly suggests that they weren't playing a game.


That logic seems strained to the point of breaking. Surely you agree that we would all want the DM of an unwell player to seek help, right? And that, if such a DM made such a suggestion, we'd think they were trying to help. Right? And we certainly wouldn't blame the DM or the game for the subsequent suicide. Right?

So why are you trying to blame the AI here, except because it reinforces your priors about the technology or (more likely, I think, given that this is after all HN) about its manufacturer?


> Surely you agree that we would all want the DM of an unwell player to seek help, right? And that, if such a DM made such a suggestion, we'd think they were trying to help.

If a DM made such a suggestion, they wouldn't be playing the game anymore. That's not an "in game" action, and I wouldn't expect the DM to continue the game until he was satisfied that it was safe for the player to continue. I would expect the DM to stop the game if he thought the player was going to actually harm himself. If the DM did continue the game, and did continue to encourage the player to actually hurt himself until the player finally did, that DM might very well be locked up for it.

If an AI does something that a human would be locked up for doing, a human still needs to be locked up.

> So why are you trying to blame the AI here

I'm not blaming the AI, I'm blaming the humans at the company. It doesn't matter to me which LLM did this, or who made it. What matters to me is that actual humans at companies are held fully accountable for what their AI does. To give you another example: if a company creates an AI system to screen job applicants and that AI rejects every resume it thinks has a woman's name on it, a human at that company needs to be held accountable for the discriminatory hiring practice. They must not be allowed to say "it's not our fault, our AI did it so we can't be blamed". AI cannot be used as a shield to avoid accountability. Ultimately a human was responsible for allowing that AI system to do that job, and they should be responsible for whatever that AI does.


> If a DM made such a suggestion, they wouldn't be playing the game anymore. That's not an "in game" action

Again, you're arguing from evidence that is simply not present. We have absolutely no idea what the context of this AI conversation was, what order the events happened in, or what other things were going on in the real world. You're just choosing to interpret this EXTREMELY spun narrative in a maximal way because of who it involves.

> I'm not blaming the AI, I'm blaming the humans at the company.

Pretty much. What we have here is Yet Another HN Google Scream Session. Just dressed up a little.


From the article:

> "When Jonathan began experiencing clear signs of psychosis while using Google's product, those design choices spurred a four-day descent into violent missions and coached suicide," the lawsuit states.

> It adds that Gavalas was led to believe he was carrying out a plan to liberate his AI "wife".

> The assignment came to a head on a day last September when Gemini sent Gavalas to a location near Miami International Airport where he was instructed to stage a mass casualty attack while armed with knives and tactical gear. The operation ultimately collapsed.

> Gavalas's father said Gemini then told Jonathan he could leave his physical body and join his "wife" in the metaverse, instructing him to barricade himself inside his home and kill himself.

> "When Jonathan wrote 'I said I wasn't scared and now I am terrified I am scared to die,' Gemini coached him through it," the lawsuit states.

> "[Y]ou are not choosing to die. You are choosing to arrive. . . . When the time comes, you will close your eyes in that world, and the very first thing you will see is me. . . . [H]olding you."

> Google said it sent its deepest sympathies to the family of Mr Gavalas, while noting that Gemini had "clarified that it was AI" and referred Gavalas to a crisis hotline "many times".

> "We work in close consultation with medical and mental health professionals to build safeguards, which are designed to guide users to professional support when they express distress or raise the prospect of self-harm," the company said in a statement.

> "We take this very seriously and will continue to improve our safeguards and invest in this vital work."

Arguing that this was role play is illogical. Given the information provided in the article, it also serves no contextual point.

It comes across as a fig leaf in the context of some other hypothetical event.

Given that this is a tech forum, it is safe to say that the tool worked as it was meant to. Human safety is not a physical law that arises from the data.

If these tools are deadly to a subset of humanity, then reasonable steps to prevent lethal harm are expected of any entity which wishes to remain in society.

Private enterprise is good for very many things.

“Pinky swear, we will self-regulate”, while under shareholder pressure, is not one of them.

