And in reality, the set of rules we would need can never be fully spelled out.
Alignment is the goal of having an AI system understand what we would want it to do even when the rules weren't predefined. That's not necessarily an impossible task, but it seems impossible, and we don't yet know how to do it.