Mr. Beast Deepfake Scam (twitter.com/mrbeast)
28 points by lachlan_gray on Oct 4, 2023 | hide | past | favorite | 20 comments


This will only end when companies serving ads become responsible for their content. Yes, thorough verification of ads "doesn't scale", but cheap scaling is mainly a benefit to the company, not to the consumer.

I can imagine a major cable channel receiving heavy fines if they were to show an ad like that on TV. Why is it different with the ad giants?


Ads should not “scale”. Because every ad represents real income and is directly targeted at human eyes, it is completely feasible to put human eyes on every single ad on a platform. If you have more ads than you have human eyes to look at them, that shows a significant issue. If cost is an issue, cost of advertising should go up. Untrusted advertisers (without history) should be subject to more rigorous evaluation.

Requiring human eyes in the loop for online advertising is the type of thing that should be legislated.


It is funny that our society has the additional threat surface of a guy who really will give you $100,000 if you do This One Weird Trick


There's nothing good to be said about people scamming each other in every technologically possible way. Other than that hopefully this gives us a new genre of cyberpunk where high tech and low life are achieved in a dog-eat-dog, scammer-against-scammer world rather than under authoritarian, oppressive corps. It could be quite cool to see this taken to its logical extreme in fiction.


Societal trust is finite; once it runs out comes some populist promising to send fraudsters to camps.


They don't even filter regular scams.

I was even served malware on APNews.com (via Google Ads) a while back...

But why would they work harder to filter them? That costs a lot of money.


We need image sensors that cryptographically sign their output so that images can be attested for authenticity.


I never understood the idea behind that. One can simply record the deepfaked content to have it "cryptographically blessed." That's what people still do to bypass DRM (just recording a screen) and it works so well that you can't even tell it's just a camera recording a screen.

Either way, that assumes the security holds up and someone doesn't just figure out how to extract the key from the hardware. Have a look at the lengths people go to to hack into video game consoles; these camera vendors have no chance.


This won’t help at all. Literally zero videos or photos you see have come directly bit for bit from a sensor.

Adobe Premiere, After Effects, and Photoshop would like a word.


Why can't I run that signature algorithm on my deepfaked video?


A single piece of hardware would capture the image and then sign it. All of the chips involved would need to be integrated together as a secure element, so the only way to get the image out is through that hardware. It would then be in the manufacturer's interest to keep the keys secret. The keys could even be tied to a specific camera or batch of cameras. You then authenticate images from the camera the way your browser authenticates bytes from a web server: the image gets posted on X, the browser checks the image's signature against a key from the image sensor manufacturer, and it displays "this image is authenticated by Sony" if they're the ones that made the sensor.
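The scheme described above can be sketched in a few lines. Python's standard library has no asymmetric signatures, so HMAC stands in here purely for illustration; a real camera would use something like Ed25519, with only the verification (public) key published by the manufacturer. All names and the key below are hypothetical.

```python
import hashlib
import hmac

# Hypothetical per-camera (or per-batch) key burned into the secure element.
FACTORY_KEY = b"secret-burned-into-the-sensor"

def sign_capture(image_bytes: bytes) -> bytes:
    """Runs inside the secure element: signs the raw sensor output."""
    return hmac.new(FACTORY_KEY, image_bytes, hashlib.sha256).digest()

def verify_capture(image_bytes: bytes, signature: bytes) -> bool:
    """Runs on the platform/browser side: checks the capture is untouched."""
    expected = hmac.new(FACTORY_KEY, image_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

image = b"\x89raw-sensor-pixels"
sig = sign_capture(image)

assert verify_capture(image, sig)                 # untouched capture verifies
assert not verify_capture(image + b"edit", sig)   # any modification breaks it
```

Note this only attests that the bytes came from the sensor unmodified; as the replies point out, it says nothing about whether the scene in front of the lens was itself a screen playing a deepfake.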


Right. Sony holds the private keys; Sony* can make signed deepfakes. And nothing stops us from laundering authenticity by taking a picture of a picture.

* or, like Microsoft showed us, Sony insiders / intruders


NFT ads. Web3 here we go!


If you're looking for a criticism, might I suggest: "won't somebody just print their photoshopped photo and then take a picture of that?" At least that criticism is coherent, and it allows for responses like "this could be remediated by signing additional data points such as time, gyroscope reading, and ...".

What I'm pointing at is the unfortunate centralization, the exact ideological antithesis of the technologies you are mock-appreciating. I really wish I knew a better way; there's no distributed proof of truth.


Tying NFTs/Web3 to some future image sensor signing tech wouldn't be an enhancement. Public sentiment about that space has shifted rapidly as scam after scam is uncovered.

Even if some of the underpinning ideas are useful, they should be reintroduced specifically for these new use cases.

The future of this tech needs to be developed in lock step with the existing web that people actually use. This means eventual uptake by major browsers, and hopefully agreement on open standards.


This will get worse as the technology gets better. It will happen with news too, causing mass chaos as false stories confuse people.


Would a public/private signature for videos work in solving such fakes? If not, why not?


There are ideas here, but I am not seeing a way to make them realistic. Ideally an ad containing any actors would somehow be signed by those actors, but with the number of different platforms and formats I don't really see a practical solution.

How would the ad get signed and when, and how would it stay valid across placements?


They would not, in the same way that website certificates do not solve phishing sites; if anything it's even worse here.

How does an authentic video by user mjohm959ga9, showing an authentic deepfake of MrBeast, help solve this?


Because those who fall for it couldn't care less about checking signatures.



