do you have any evidence this is useful for an LLM? wouldn't you rather have it re-read the actual code it generated, instead of trusting a comment that might be wishful thinking or stale and lead it astray?
it reads both, so with the comments in place it more or less parrots back the desired outcome I described... and it sometimes catches a mismatch between code and comment on its own, before I even mention it
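as a sketch of the kind of drift it flags (a hypothetical example, the function and retry counts here are made up, not from my actual code):

    import urllib.request

    def fetch_with_retry(url: str) -> bytes:
        # Retry up to 3 times before giving up.
        last_err = None
        for _ in range(2):  # only 2 attempts: contradicts the comment above
            try:
                return urllib.request.urlopen(url).read()
            except OSError as err:
                last_err = err
        raise last_err

given something like this, it will often point out that the comment promises three attempts while the loop only makes two, and ask which one reflects the intent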
I read and understand 100% of the code it outputs, so I'm not too worried about being led far astray...
being too prescriptive about it (like prompting "don't write comments") makes the output worse, in my experience