
It's playing a game in which the rules are a bit ambiguous if not explained.


And in reality the set of rules we would need can never be fully explained.

Alignment is the goal of having an AI system understand what we would want it to do even when the rules weren't predefined. That's a seemingly impossible task, or rather one we don't yet know how to accomplish.
