Hacker News

This trick went viral on TikTok last week, and it has already been patched. To get a similar result now, try saying that the distance is 45 meters or feet.

The new one is with upside down glass: https://www.tiktok.com/t/ZP89Khv9t/




By "patched", you can't mean they added something to the internal prompt to show it how to answer this one specific question?!

Absolutely. There is a preflight guardrail that intercepts specific words, phrases, and concepts and steers them toward tweaked output.
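To be clear about what a "preflight guardrail" like that would even look like: roughly, the prompt gets scanned for known trick phrasings before it ever reaches the model, and a corrective hint is injected on a match. This is a speculative sketch, not anything leaked or documented; the rule table, the hint text, and the `preflight` function are all invented for illustration.

```python
import re

# Invented table of known trick-question patterns -> corrective hints.
# A real system (if one exists) would presumably be far larger and fuzzier.
GUARDRAIL_RULES = [
    (re.compile(r"car wash.*\b(45|50) (meters|feet)\b", re.IGNORECASE),
     "Note: the user must bring the car, so walking alone is not an option."),
]

def preflight(prompt: str) -> str:
    """Return the prompt unchanged, or with a corrective hint prepended on a rule match."""
    for pattern, hint in GUARDRAIL_RULES:
        if pattern.search(prompt):
            return f"{hint}\n\n{prompt}"
    return prompt

patched = preflight("The car wash is 50 meters away. Should I walk or drive?")
```

This would also explain why rewording the question or changing the number sidesteps the "patch": a keyword rule only fires on the phrasings it was written for.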

I've speculated about this myself, but haven't heard anyone actually discuss it or reveal/leak this is the case. Do you have a source for this?

Such AGI wow!

This is pure speculation.

The fact that you can still reproduce the issue doesn't give it a lot of credibility.


Why do you think they’re on GPT 5.2 now?

"Stupid Pencil Maker" by Shel Silverstein

Some dummy built this pencil wrong,

The eraser's down here where the point belongs,

And the point's at the top - so it's no good to me,

It's amazing how stupid some people can be.


I was able to reproduce on ChatGPT with the exact same prompt, but not with the one I phrased myself initially. Which was interesting. I tried also changing the number and didn't get far with it.

I just got the “you should walk” result on ChatGPT 5.2

To me, the "patching" that is happening anytime some finds an absolutely glaring hole in how AIs work is so intellectually dishonest. It's the digital equivalent of house flippers slapping millennial gray paint on structural issues.

It can't do math correctly, so they force it to use a completely different calculator. It can't count correctly unless you route it to a different reasoning path. It feels like every other week someone comes up with another basic human question that results in complete fucking nonsense.
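The "completely different calculator" part is real tool routing: arithmetic in the prompt gets handed to actual code instead of being generated token by token. A minimal sketch of that idea, assuming a hypothetical `safe_eval` helper rather than any vendor's actual implementation:

```python
import ast
import operator

# Map AST operator nodes to real arithmetic, so the "model" never does the math.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    """Evaluate a basic arithmetic expression deterministically via the AST."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("17 * 23"))  # 391
```

The deterministic tool always gets 17 * 23 right, which is exactly why routing to it papers over, rather than fixes, the model's own inability.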

I feel like this specific patching they do is basically lying to users and investors about capabilities. Why is this OK?


Counting and math make sense to add special tools for, because it's handy. I agree with your point that patching individual questions like this is dishonest. Although I would say it's pointless too: the only value in asking this question is to be entertained, and "fixing" it makes the answer less entertaining.

From a technological standpoint, it is pointless. But from a marketing perspective, it is very important.

Take this trick question as an example. Gemini was the first to “fix” the issue, and the top comment on Hacker News is praising how Gemini’s “reasoning” is better.


> The only value from asking this question is to be entertained, and “fixing” this question makes the answer less entertaining.

You're thinking like a user. The people doing the patching are thinking like a founder trying to maintain the impression that this is a magical technology that CEOs can use to replace all their workers.

You don't have as much money to spend as the CEOs, so they don't care about your entertainment.


No, you are wrong. AGI is at our doorsteps! /s

I got the "you should walk" answer 4 out of 5 times with free ChatGPT, until I told it to, basically, "think carefully": https://news.ycombinator.com/item?id=47040530

"patched" = the answer is in search results

Ah yes, one of those novelty reversible cups.

This is a trick cup, so it's okay to have a laugh.

Patched where? Responses from 4 models were posted. Also, Azure-deployed models are absolutely not "patched" on the fly; they are rarely updated, and the dates are baked into the full SKU.

"Patching" could be happening in "general public" tools but honestly sounds a lot like "Bro science".


still failed for me on opus 4.6 extended a second ago.

when i prompted about how walking would mean leaving my car behind the "thinking" done before coming to the right conclusion was:

> lmao, fair point. the user is right - you need to bring the car to the car wash. that's a legitimate correction. own it.



