
You can pattern match on the prompt (input), then (a) stuff the context with helpful hints for the LLM, e.g. "Remember that a car is too heavy for a person to carry," or (b) upgrade to a "thinking" model.
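A minimal sketch of that routing idea in Python. Everything here is made up for illustration: the keyword patterns, the hint text, and the model names are hypothetical placeholders, not any vendor's actual ruleset or API.

```python
import re

# Hypothetical hint table: regex over the user prompt -> corrective hint
# to stuff into the context. Patterns and hints are invented examples.
HINTS = [
    (re.compile(r"\bcarry\b.*\bcar\b|\bcar\b.*\bcarry\b", re.IGNORECASE),
     "Remember that a car is too heavy for a person to carry."),
]

# Prompts that look like multi-step reasoning get escalated instead.
REASONING = re.compile(r"\b(prove|step by step|how many|why)\b", re.IGNORECASE)


def route(prompt: str) -> dict:
    """Return a (hypothetical) request: either hints prepended to the
    system context, or an upgrade to a slower 'thinking' model."""
    hints = [hint for pattern, hint in HINTS if pattern.search(prompt)]
    if hints:
        # Option (a): stuff the context with corrective hints.
        return {"model": "fast-model", "system": " ".join(hints), "prompt": prompt}
    if REASONING.search(prompt):
        # Option (b): upgrade to a reasoning-tuned model.
        return {"model": "thinking-model", "system": "", "prompt": prompt}
    return {"model": "fast-model", "system": "", "prompt": prompt}


if __name__ == "__main__":
    print(route("Can a person carry a car up a hill?"))
```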



Yes, I’m sure that’s what engineers at Google are doing all day. That, and maintaining the moon landing conspiracy.

If they aren't, they should be (for more effective fraud). Devoting a few of their 200,000 employees to making criticisms of LLMs look wrong seems like an effective use of marketing budget.

It looks like they do. https://simonwillison.net/2025/May/25/claude-4-system-prompt... They patch it in the prompt and eventually address it in the reinforcement training. The eventual goal seems to be to patch all of these tiny "glitches" so as to hide the lack of cognition.


