After I proposed the idea, they said they liked it and were willing to give it a try. They offered three options: 1) making slight changes to the UX, 2) adding a payments feature, or 3) resolving some back-end bugs. They asked which of them sounded most appealing as a test.
This was perfect from my perspective, and after I signed the NDA I told them all of those sounded fine. I mentioned that fixing a bug is one of the best ways to get familiar with a new codebase, so I'd probably focus on those, but that I'd like to hear about the rest too. I can probably throw in some of the feature requests or UI tweaks.
That's a bit more candidate-driven than might be expected, but it's really wonderful so far. I feel like I have autonomy, while at the same time they'll get the data about whether I'm effective in the field.
The bias question is important, and I'm not quite sure the best way for us to collectively deal with it. All I know is that there seems to be far more bias in a traditional interview pipeline. One of my recent rejections came after being asked "A hammer and a nail cost $1.10, and the hammer costs a dollar more than the nail. How much does the nail cost?"
Obviously the point of that is to see how you think through a problem. But such problems are useless: they're abstracted from what we as engineers actually do in the field, and the candidate knows their entire fate rests on how well they answer that one question. Yet those are the kinds of metrics real companies are currently using.
> One of my recent rejections came after being asked "A hammer and a nail cost $1.10, and the hammer costs a dollar more than the nail. How much does the nail cost?"
this just got me to see if i can still work through a basic high school algebra problem (yes, slowly; guess-and-check would've been quicker but less fun).
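For anyone else checking their algebra, a minimal sketch of the substitution, using exact fractions to avoid floating-point noise:

```python
# The classic trick question: nail + hammer = 1.10, and hammer = nail + 1.00.
# Substituting: nail + (nail + 1.00) = 1.10  ->  2*nail = 0.10  ->  nail = 0.05.
from fractions import Fraction

total = Fraction("1.10")
difference = Fraction("1.00")

nail = (total - difference) / 2
hammer = nail + difference

print(nail, hammer)  # the nail is $0.05, the hammer $1.05 (not the intuitive $0.10)
```

The intuitive-but-wrong answer of 10 cents fails the check: a $0.10 nail plus a $1.10 hammer totals $1.20.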