How to conduct a good programming interview (lihaoyi.com)
251 points by lihaoyi on Sept 10, 2017 | hide | past | favorite | 182 comments


Reading over some of the questions posted in technical programming interviews, I must say that I would definitely fail such an interview without studying for it for a month first. Having recently discussed the job interviews of ex-colleagues who went to eBay, Facebook, Google, etc., I wouldn't even pass the whiteboard test.

However, I'm shipping scalable APIs used by web and native apps, using Continuous Integration and Deployment on an AWS infrastructure that I set up myself. Furthermore, I'm responsible for hiring the developers I work with on agency projects and, most important of all, for getting projects delivered within time, scope, and budget AND getting paid for it. Yet I would fail most technical interviews.

I've been doing this for about 10 years and never ran into any major problems. I've worked both as a freelancer and in-house permanent dev and never had any complaints about my work. (Feedback and improvements yes, big failures never). Yet I would fail most technical interviews.

I talk directly to clients and vendors, I sell projects, I take over projects, I end projects and I've created 2 startups, one of which I sold but I would definitely fail a purely technical interview.

I completely understand the need for these technical interviews to weed out obvious bad programmers and get experts on board, but sometimes from a business point of view it just doesn't make sense.

At a previous company I worked for, the team almost crashed & burned because of the TL's focus on purely technical skills. No code was being shipped, just hours of meetings, Slack conversations, and discussions on PRs, for no good reason, about 0.01% improvement gains.

I know I'm not the best developer out there and never will be. Often I'm in awe of the technical skills of people out there, but I'd rather have a less skilled dev with great "people & shipping" skills, as I call it, than some of the people who would pass all these tests but whom I couldn't put in front of a client or even let email a vendor.


> I completely understand the need for these technical interviews to weed out obvious bad programmers and get experts on board, but sometimes from a business point of view it just doesn't make sense.

The thing is that the main goal of the technical interview is not to hire all good candidates. The main goal is to avoid hiring bad candidates. A large high paying company like Facebook has plenty of candidates to choose from, and the main priority is avoiding expensive bad hires. The cost of failing to hire a good engineer is small - they can just take the next guy. The cost of hiring a bad engineer is large.

With this in mind, the process makes a lot of business sense. Someone who is capable of learning what's needed to do well on a technical interview has demonstrated a fairly high level of competence, and is (relatively) unlikely to be a really bad hire.

This filter does limit your applicant pool a little, but if you're Facebook, the largest chunk of your applicant pool are recent comp. sci. graduates. If they can't solve an algorithms problem reasonably gracefully then that's a bit of a red flag. For non-recent-comp-sci-graduates, a competent candidate really should be able to prepare pretty well in a month or two of spare time.

Facebook might miss out on hires like you who would likely be quite good at the job but aren't willing to put in the effort to prep for the interview, but that's a trade off they'll make happily if they minimize bad hires.

Smaller companies with lower pay and access to fewer qualified candidates would probably be ill advised to copy the Facebooks and Googles of the world though. I'll grant that.


>> The cost of failing to hire a good engineer is small

I'm not so sure about that.

https://en.wikipedia.org/wiki/Jan_Koum

"In September 2007 Koum and Acton left Yahoo and took a year off, traveling around South America and playing ultimate frisbee. Both applied, and failed, to work at Facebook."

"On February 9, 2014 Zuckerberg asked Koum to have dinner at his home, and formally proposed Koum a deal to join the Facebook board - 10 days later Facebook announced it was acquiring WhatsApp for US$19 billion."


It seems unlikely to me that as a full time Facebook engineer Koum would have still had time to create a multi-billion dollar startup.

It also seems unlikely to me that he would have been motivated to put in that work for an employer, or that Facebook would have given him the time or freedom necessary to do it.

If he had been hired, he'd have probably been a fine engineer, but there's really no reason to think he'd have added more value than the next guy. The skillset needed to create a successful startup is not the same as the skillset needed to be an effective engineer at a large company.


Are you saying that Facebook employees are not motivated to work? Or that now after he has been hired, he is not adding any value anymore?

Facebook is a multi-billion dollar startup, and it's created by the people working there. So it may not be my favorite company, but I respect that, and the people working there.

I can't say what the skillset is to be an effective engineer at a large company, but I do believe that it must be a SUBSET of the skillset required to create a successful startup.


> Facebook is a multi-billion dollar startup

Where 'startup' is used to mean 'hip place to work', I gather?


But on the other hand, if they had been hired, they would just be middle-level managers or second-tier VPs and would not have created WhatsApp.


> Someone who is capable of learning what's needed to do well on a technical interview has demonstrated a fairly high level of competence, and is (relatively) unlikely to be a really bad hire.

Sure, as long as their only role is repetitive coding tasks that require limited amounts of broadening their experience beyond the DS&A trivia places like Facebook require.

However then Facebook ends up with stuff like their iOS app ("iOS can't handle our scale"), which had a lot of (probably) very talented CS folks putting together an app that was pure garbage (from an engineering and usability perspective).

The filter limits the applicant pool to that set of people uninterested in solving problems outside the scope of a CS textbook. Capability has little to do with it.


> The filter limits the applicant pool to that set of people uninterested in solving problems outside the scope of a CS textbook.

The only way this would be true is if being good at textbook CS makes you bad at solving other problems. I'm assuming you don't actually think that, 'cause it's kind of crazy.

Even giving the benefit of the doubt and assuming you mean that the filter makes the pool less likely to contain good problem solvers, it's hard to see where this idea is coming from. In my experience, being good at one sort of problem solving makes someone more likely to be good at other sorts of problem solving. The correlation between skill in problem solving across domains is weak sometimes, but it's definitely not negative!


The only way this would be true is if being good at textbook CS makes you bad at solving other problems.

The way the typical interview process works, basically, is "we have a list of 6-12 algorithms we will ask about; if you have memorized implementations of them, and certain key phrases to identify which one to regurgitate on command, you pass the interview".

This is then touted as a test of "CS knowledge" and "problem-solving ability".

Tech interviewing is Goodhart's law in action.


Note I wrote "uninterested" not "incapable." There's a clear anti-engineering bias in software development, and one way that bias is exhibited is by a narrow hyper-focus on the minutiae of DS&A trivia.


> Sure, as long as their only role is repetitive coding tasks that require limited amounts of broadening their experience beyond the DS&A trivia places like Facebook require.

While those "trivia" problems are part of the process for big companies, they are far from the entirety of it. Especially for industry candidates, plenty of other aspects of the interview process focus on broad experience and engineering principles.

I have no reason to believe that CS ability anti-correlates with broader ability/experience. In fact, I'd posit that performance on algorithm problems correlates well with general intelligence, which correlates well with performance across the job.


Things like the FB app are really a function of the amount of code and the number of developers working on it at once.

Amount of code is a function of the number of engineers. It's a social-organization issue and a big-company issue more than anything else. If Facebook removed the algorithms part of their interview through a time machine, the same issues would still have cropped up.

Also, in backend and foundational infrastructure work, where you build compilers and base foundational libraries, you do tend to use those algorithmic things once in a while. You use stats in analytics and data science, and you use math in ML and other initiatives. FB isn't just CRUD construction workers, after all :p


A friend just went through the hire process at FB for iOS.

It actually was very practical. According to him it was more or less the stuff we do every day, with one generic programming puzzle of sorts.


Hey, I'm actually about to go through the FB iOS interview process and would really appreciate some insight/help from your friend if they have time. Thanks!


>The cost of failing to hire a good engineer is small - they can just take the next guy. The cost of hiring a bad engineer is large.

This is absolutely and completely backwards. The cost of hiring a bad engineer is small. You can always fire them. The cost of failing to hire a good engineer is enormous. You will never get a chance to hire them again, and probably all of their friends.


It's not easy to fire employees.

Firing employees can scare and stress other employees and create bad blood and criticism on job boards, which will make it even harder to hire good employees. Firing might mean you still have to pay the employee for a few months. It might be hard to determine whether an employee is a complete failure, which means you might have to give them a few months, or even a year, to accurately determine that they're not performing. It's not cheap at all.


California is an at-will state. Surely tech companies can learn to innovate their HR processes to create a transparent, professional, and optimal way to remove a poor hire.

And maybe not all poor hires are created equal, in which case maybe the cost of training up a bad hire would be less than the cost of firing them, plus the opportunity cost of taking longer to hire a better candidate?


Firing is never fast or easy. It usually takes months to realize your mistake, since it takes that long to get up to speed on a new job. And then you have to hire someone else and get them up to speed. As a result, you're delayed by months and out months of salary.

Also, firing is terrible for morale. You really can't do it that often.

Failing to hire a good engineer is only a big cost if you can't find a decent alternative hire. If you're paying a 6 figure salary with massive benefits like Facebook or Google, you usually have a long queue of qualified candidates to choose from. Maybe the next guy will turn out to be better than the guy you passed on.

> You will never get a chance to hire them again, and probably all of their friends.

I happen to know that Google gets lots of repeat applicants. They have an explicit "wait a year before reapplying" policy.

I really doubt many people who want to work for these companies decide not to because a friend didn't get the job. That's just sort of crazy.


>Also, firing is terrible for morale. You really can't do it that often.

If you hired a bad engineer, not firing them is bad for morale, assuming you have a quality team. They will be pissed at picking up the bad hire's slack and spending so much time helping them.

>If you're paying a 6 figure salary with massive benefits like Facebook or Google, you usually have a long queue of qualified candidates to choose from.

But that's the exception, they have what most companies don't: prestige (rightfully or not). You listed the two companies that actually can get away with this without hurting themselves. Few others can.

See, this article wasn't written for Google and Facebook, it was written for everyday company CIOs, and this is very harmful for them for the reasons I stated. Who wants to spend hours writing calculators during an interview for Initech? Almost nobody. Also, a six figure salary is pretty much the norm for good developers all over the country. The main benefit for working at Facebook and Google is stock options and it looks good on a resume.


maybe, but if you trust your friend and he says the recruitment process sucked are you really going to want to go through it?


It's really about blasting out advertising and hiring good-enough cheap. The engineers with a network will know not to go through your process and would expect a salary appropriate to the job; the ones who developed a lot of skills on their own with no network (or who are afraid of honest discussions with peers, where they might not be the best) will settle for what they are offered.

When I look at who from my network (or, really, who's missing from my network) is at the places with these tactics... losing time and failing to get the job is not the worst outcome.


But those big companies constantly complain about there not being enough people. And they also take people who passed hard tests and put them in positions that are far more rote or require a different set of skills. And the geniuses then stay because of the perks and social programs, but talk about the actual work as boring.

While there are many low-skilled people looking for jobs, there are also many, many companies/teams who can't admit to themselves that the overwhelming majority of their positions are not that difficult or responsible, and that someone less than a genius could do them.


Problem is, these algorithm based reinventing the wheel exercises don't filter out bad candidates. Being good at these has no relevance in 99% of companies.

We give candidates a take home exercise which is a simplified version of our business domain. We look for clean, readable, well tested OO code.

We had to turn down a CS PhD because their solution didn't meet any of these criteria (no tests, for example).

Were we to do algorithm trivia instead, they would have passed with flying colours and we'd be burdened with unmaintainable code.


The desire for a 'good candidate' and the paranoia about making a mistake is sounding increasingly utopian. There is a certain nervousness and even a hint of hysteria about it.

A bad hire is not the end of the world and is easily corrected, and one can't help feeling there is some magnification going on somewhere about hiring mistakes. There are near-constant evaluation and feedback loops in most organizations. A 'bad hire' simply won't survive them.

No one hires a person not suited to the job in terms of technical requirements; that's a given. But there are other things at play in the final decision that are less obvious in the interview process, so mistakes will be made. Better to make them and move on rather than manufacture this increasingly hysterical narrative about the 'perfect hire'.


> The desire for a 'good candidate' and the paranoia about making a mistake is sounding increasingly utopian.

I don't know where you're getting your 'perfect hire narrative' idea from; this is simple cost/benefit analysis. If making a bad hire is a lot more expensive than failing to make a good hire (and I'm pretty sure it is by a factor of at least several times - failing to make a good hire is really cheap), then you choose to make the cheap mistake of not hiring instead of the expensive mistake.

This is about as far from utopianism or looking for the 'perfect hire' as you can get. It's all about finding enough 'good enough' hires, while avoiding expensive mistakes.


What makes a bad hire? Couldn't a 60-70% hire be trained up, rather than expecting all of the candidates to be 90%?


If you aim for 90%, and a few 60-70%s slip through, you can maybe train them up or fire them. If you aim for 60%-70% and end up with a 20-30%, then you have to fire them.


This is a great level headed summary of the whole CS degree vs craftsman argument, thank you.


This leads me to think a good technical interview will screen out people who fail all, or a large majority, of the problems posed.

Someone who you would want to consider is someone who can reason and navigate most of the technical problems put forward - and it would certainly be great to have someone who can solve them all.

I like the earlier reply saying that people skills/communication and time management are important. Regardless of whether a person can solve a problem or not, I'd want them to identify things to discuss or be able to deliver a solution at the end of the time allotted for a question. I want deliverables. You can't move forward without feedback.


This is something which has vexed me for ages. I would fail most contemporary software interviews. I personally don't view myself as a great developer but whenever I make comments like "I would fail most contemporary software interviews" people who have worked with me swear that I must be full of it. Perhaps they're just being nice.

I'm pretty sure I'd fail the current software interview at my company, yet I remain gainfully employed there without issue. It's not like I'm just scraping by either, I do well for myself there.

Luckily I enjoy my current job as the thought of going through these interviews and constantly failing miserably frightens the hell out of me. Part of the problem is that I refuse to study for months just so that I can pass some interviews and then promptly forget all of that stuff I just studied. That's just silly.

I don't know what this means for our industry, and it's not like I have any better ideas, not reasonable ones at least.


This may be like "approach anxiety" for when guys never strike up a conversation with a woman they are interested in because they think all women will reject them. Why not try and go on some interviews? How can you possibly know you will be rejected without even going on one interview? Do you think all companies are the same, and your competition is all prepared for whatever it is you imagine they are doing? You're shadowboxing without ever getting in the ring.


I believe your error here is "without even going on one interview". There are many developers who have proven their skills and abilities on the job, and also have a proven ability to repeatedly fail interviews. This is precisely why the developer interviewing process is such a hot topic here.


There's another aspect that I can speak to personally. My career never really got off the ground. For most companies I'd get through all of the technical phone interviews just fine, and after the in-person interviews it was always a rejection. Coupled with my deteriorating financial situation, that led to a lot of depression and massive amounts of self-doubt.

To this day I still doubt my abilities quite heavily. Friends and coworkers say I'm smart and talented and whatever, but quite frankly I don't believe them. These interviews can be soul crushing to people with my type of personality.


are you suggesting that if you have "proven yourself " you should be able to skip the interview process?


The suggestion, which many experienced people also make, is that once you've proven your ability the interview shouldn't focus on that. Spending 5 hours asking about algorithms is a waste of time when you already know someone has the technical chops.


I am suggesting that "Why not try and go on some interviews?" is a silly, and potentially offensive, thing to say in response to someone describing their difficulties with the interview process. Especially when discussing a field where there is considerable ongoing debate about the usefulness and sensibility of various common interview techniques.


I didn't do a technical interview at all for my last few jobs.


As a counter-example, I also have about 10 years of experience but never studied for interviews. On average I pass about 50% of the interviews at companies big and small (e.g., passed Google, failed Facebook). I think people often look at questions and think "there's no way I can do this," but many times you can get some hints and use enough first principles to do well enough.

Individual questions are often bad indicators of quality, but interviews often consist of 4-5 or even more questions, and in aggregate they do say something about the candidate. Some companies do require candidates to pass all the questions, and that's a different problem.


I think that if somebody with 10 years of actual and relevant experience fails the interview process, the interview process has a problem.


All that's required to get 10 years of experience is the ability to get your foot in the door and not do so badly that they actually fire you. It is absolutely not a guarantee of any real competence.

At my last job I did a bunch of interviewing, and the very worst candidates we had were engineers with years of experience working at badly run companies. If management is bad enough, a company can't hold on to good engineers. As a result, they end up with just the engineers who can't go elsewhere. Since these companies are filled with bad engineers, there's no culture of learning, and an engineer who isn't really motivated can go years learning basically nothing.

Experience is great, but it's totally possible to go a decade without learning much.


I really really really want everyone who conducts interviews to read this:

http://blog.interviewing.io/you-cant-fix-diversity-in-tech-w...

It's based on a decent sample size and challenges a lot of assumptions about the consistency and quality of tech interviewing practices. In particular, the person you're talking about, who says they pass only about 50% of interviews, is telling. Quote from the article:

As you can see, roughly 25% of interviewees are consistent in their performance, but the rest are all over the place. And over a third of people with a high mean (>=3) technical performance bombed at least one interview.

This is not a problem of terrible horrible no-good very-bad "fake programmer" candidates who somehow held onto jobs for years. This is a problem of interviewers who are fundamentally incompetent to conduct interviews and evaluate candidates.


Your opinion is based on the assumption that your interview process yielded an accurate assessment of competence.


Well, sure.

We're not talking sophisticated things here though. The exercises that I've formed this opinion from were closer to fizz-buzz than to data structures or algorithms questions.

If it takes you 20 minutes to code something about as hard as fizz-buzz (on a computer, with internet access), and the code you have at the end is terrible, I feel pretty confident saying you're not an effective engineer.


Yet the Google and Facebook interview questions are decidedly not trivial fizzbuzz problems. They're more on the order of the second midterm of a 300-level algorithms class. This is not stuff that most 10-year engineers have in their heads.

I'm not commenting on your company's interview process; I'm merely affirming the earlier poster's claim that there is a serious issue with the Google interview process and its cargo-cult successors (Facebook, et al.).


It also depends on who you happen to be competing with for the position at the time.

To "fail" an interview might just mean you were their second choice, out of current applicants.


While you can get away with only basic algorithmic skills, and I also agree with your points about being able to ship/talk to users/etc., I think that having decent algorithms & data structures skills is essential for a developer. For example, I have worked on projects where people were not able to ship a product in a year with 10 people, and our new 3-man team shipped within 6 months just because we figured out that an algorithm solved the problem. You just can't do well in some contexts when you don't have the ability to figure out whether the problem you are struggling with has a well-known solution. I think it all boils down to having at least a decent ability in all areas (like product management, soft skills, math, etc.) involved in order to be able to be productive in all environments.


Counter-point: I am the technical lead on a team of 8. We ship a new data science product (or major enhancement to an existing product) about every 9 months, from inception through putting it in maintenance mode. Our products generate double-digit multiples of my team's costs in revenue and have a reputation for being the easiest to implement and most stable for our customers. We have one CS guy, who is a junior engineer, and he doesn't even have a Master's degree. I can't remember the last time an in-depth discussion about which data structures or algorithms to use actually came up in a design, or even mattered too much in the end.

The reason, as I see it, is simple: the engineering work that the Principal Engineer and I do, along with the juniors we mentor, is far more important than data structures and algorithms trivia. By the time it gets to deciding which of these to use, the real problems have already been solved. Now, these choices aren't unimportant: a bad algorithm or data structure might cause problems, but these are choices that are, in the main, easy to understand and make compared with the thought that goes into the rest of the system.


This is not a counter-point. This is what I have said:

> You just can't do well in some contexts

This means that your project is either simple enough that you don't need to know advanced CS or someone on your project actually has that knowledge.

I've also said:

> I think it all boils down to having at least a decent ability in all areas involved in order to be able to be productive in all environments.

So it seems you are barking up the wrong tree. My point is that you need decent knowledge of all areas related to your project in order to contribute meaningfully, and this statement still stands. If you write a useful CRUD application, you don't have to know how a hashtable works.


You can have decent algorithm skills and not be able to demonstrate that in an interview environment. You get more than 20 minutes in the real world and you're allowed to do research.

Most algorithms that developers will commonly need already exist, and the skill is not developing algorithms but recognizing which to use and when, and understanding them well enough to adapt them to specific needs. Those aren't the skills interviews test for.


I did not say that you have to implement it. You just have to know which tool to use. And in most cases you only need a `Map` or a `TreeMap` or stuff like that.
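The distinction here, picking the right tool rather than implementing it, can be sketched in a few lines. (The `Map`/`TreeMap` names in the comment are Java's; in the Python sketch below, a plain dict covers exact-key lookups, and a sorted list with `bisect` stands in for an ordered map. The names and data are illustrative only.)

```python
import bisect

# Exact-key lookup: a plain hash map (Java's Map/HashMap, Python's dict).
counts = {"interview": 3, "algorithm": 5}

def count_at_least(sorted_values, threshold):
    """Range query over sorted data: the job an ordered map (Java's
    TreeMap) does. bisect gives O(log n) instead of a linear scan."""
    return len(sorted_values) - bisect.bisect_left(sorted_values, threshold)
```

The point is not the implementation, which the standard library already provides, but recognizing which of the two access patterns the problem calls for.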


There are bad developers who know algorithms and bad developers who don't, though I'd say there are more of one than the other. So companies that haven't figured out a better way to weed out candidates still insist on having candidates write them on their pretty whiteboards.

I myself have already come to accept that we're all idiots, and that one mere human can only master a very small set of skills before moving into retirement.


Many large organizations want their engineers to have the algorithmic ability and their technical project managers and engineering managers to have the strengths you describe. Technical leads who possess both bridge the gap.


I've been a developer for quite a while now, worked on all sorts of projects, including quite a few with odd / bespoke requirements.

And I am terrible at technical tests in interviews.

I can do the job, but unless I was to prepare for about a month I would fail, and there is no point since I can use my time for myself and still get a job somewhere else.


Depending on the domain, a lack of ability on either side of the soft/hard skills line can lead to missed opportunities to deliver a great product. If you don't have at least one or two skilled implementors on your team (or if those people are marginalized by a culture that isn't able to recognize good technical decisions), you'll be severely limited in what you're able to build, and what you can build will probably suffer in quality.

There's a difference between practical technical excellence and over-the-top bikeshedding/obsession with needless optimizations.


> No code was being shipped but hours of meetings and Slack conversations and discussions on PR's for no good reason about 0.01% improvement gains.

That may be where it makes sense to put a custom clock on the wall counting not minutes but the cumulative cost of the meeting. For simplicity, assume everyone is making $1/minute, so a half-hour meeting with 5 people costs $150 just for the people in the meeting, without even factoring in whether it's delaying the project.

Also, "perfection is the enemy of good" (or complete).
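The arithmetic behind such a clock is trivial; a sketch, where the $1/minute rate is the commenter's simplifying assumption rather than a real figure:

```python
def meeting_cost(people, minutes, rate_per_person_minute=1.0):
    """Cumulative cost of a meeting: everyone's time, priced per minute."""
    return people * minutes * rate_per_person_minute

# The example above: a 5-person, half-hour meeting at $1/minute each.
print(meeting_cost(5, 30))  # 150.0
```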


Unfortunately, ironically, that sounds like a clock that would take a long time to make.


I'm willing to cheat.

A search on Google Play Store for "meeting cost clock" comes up with several, though the best looking from the screenshots appears to be the $1.99 Meeting Cents with almost no downloads and no updates since 2014. Several others are ad supported or low cost. Frankly it'd cost more in (billable?) time to test and select one than to buy all of them.

A very simple browser based implementation is at https://www.mcgurrin.info/clock.htm and another is at https://tobytripp.github.io/meeting-ticker/


mins = 0; while (true) { wait(60); clear(); print(numPeople * ++mins); } Run it on a terminal hooked up to a big screen. Not exact, but it will get the point across.


It's good to think carefully about how you interview candidates. Most of the ideas in this post are good. You should definitely let candidates type answers on computers. You should definitely work to put them at ease, in an environment that approximates their actual working environment.

But really, this is window dressing. The problem is that technical interviews just don't work reliably. We have better ways to qualify candidates.

Rather than trying to read subjective, noisy signals from in-person interviews, give candidates a small battery of work-sample tests. These can be bug-fixes, small features, diagnostic challenges, or even specification writing assignments.

A properly designed work-sample challenge has a rubric: a set of instructions for scoring a result. Ideally, the rubric is straightforward enough for someone who hasn't met the candidate to quickly score their result. "Runs correctly". "Passes a battery of unit tests". "Includes appropriate documentation". Whatever is important for your team.
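Such a rubric can be as simple as weighted pass/fail criteria summed into a score; the criteria and weights below are illustrative, not a recommendation:

```python
# Hypothetical work-sample rubric: a reviewer who never met the candidate
# marks each criterion pass/fail; the score is the weighted fraction passed.
RUBRIC = {
    "runs correctly": 3,
    "passes the unit-test battery": 3,
    "includes appropriate documentation": 1,
    "code is readable": 2,
}

def score(results):
    """results maps each criterion to True/False as judged by the reviewer."""
    earned = sum(w for crit, w in RUBRIC.items() if results.get(crit))
    return earned / sum(RUBRIC.values())
```

Keeping the rubric in data like this also makes it easy to track and refine across candidates, which is the point of standardizing.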

Work to give every candidate the same set of challenges, so you can track results over time. You'll find that after refining and iterating for a few months, you'll rapidly build confidence in the tests, to the point where in-person interviews become superfluous for technical qualification.

Working on technical qualifiers is stressful. Most candidates will perform more poorly on them than they would on normal work. You should build that into your rubric (though, really, data from candidates will ultimately determine how strict the rubric becomes).

Understanding how stressful qualifying work is, you should consider not requiring candidates to work on problems on-site. Instead, administer them online. Suggest the amount of time you expect the problems to take to finish, and offset that amount of time from the amount you demand on-site.

When you must interview, you should work hard to standardize the interview. Interviewers hate this. Everyone has their favorite questions. People want interviews to be conversational and adaptive. Interviewing isn't supposed to be fun for the interviewer; it's supposed to be reliable.


This is my approach for evaluating new candidates as a first step. For front-end, for instance, I start with a codepen that has the basic skeleton that includes the libraries we use internally, has a few html elements styled and placed, and the required javascript boilerplate.

Then I provide 10 tasks that are written much in the way the company would put tasks in Asana, GitHub, or Jira. Each task builds on the previous one and gets more complex. In the boilerplate they start with, there is one obvious error and one subtle one. At each step, I encourage them to write notes on how they solved it, plus any questions or concerns they might have. I make myself available via chat or email if they have any clarifying questions.

They can then do it in their own time, at home. They can google, hit up documentation, Stack Overflow, etc., because that's exactly how developers work. Nobody's looking over their shoulder, there are no arbitrary limitations; just build us some good UI code.

It's true that I've been amazed at how many candidates were stopped in their tracks at the simplest things that are well within the domain of their resume. But when I get a candidate that gets through them, hits the requirements and has clean code as a result, I've hardly ever seen them fail at a later step where we might evaluate more complex problems like working around GC slowdowns, obscure browser compatibility, first meaningful paint performance, etc.

And most of all, I feel confident that I haven't tried to "trick" anyone by having them do long-solved algorithms offline on a white board.


A huge advantage of this approach is how much more confident you can be that you've qualified candidates technically. You can --- and should --- exploit that confidence commercially. Change the posture of your outreach to reassure people that you're interested in applicants regardless of their background or resume. Change where you run job ads and how you word them. If you have a strong qualification process, your job ads no longer need to do any heavy lifting to get candidates to screen themselves; after all: your screening process is going to reliably distill your candidate stream down to a technically qualified subset.

At Matasano, we went from hiring almost exclusively people with a background in our field to virtually never hiring people with a background in our field in a matter of about a year --- and the quality of our new hires went way up.


This is some pretty outstanding insight. I work with middle school kids to get them excited about computers and programming and it's amazing how many of the ones that "get it" are not the ones you would pick. Frankly, I think they are surprised at how much they like it and you can almost see the light bulb go on. If we could reach those type of people and get them to interview it would almost certainly change the industry for the better.


> Rather than trying to read subjective, noisy signals from in-person interviews, give candidates a small battery of work-sample tests. These can be bug-fixes, small features, diagnostic challenges, or even specification writing assignments.

I agree that work sample interviews are a huge step in the right direction.

Recently though, I've been thinking about taking things even further: instead of giving sample projects, why not give candidates actual work that you need done, and pay them for it, like you would a contractor? Sample projects are inevitably going to be somewhat contrived, and they'll never be evaluated in quite the same way actual code submitted in a PR will be.

This approach lets the candidate and the company "date" for a while, letting each see how the other one actually works, before they decide to tie the knot of full-time employment. I think it's the most natural way for professionals to get to know each other, build trust, and develop well-informed opinions.

Of course there are plenty of practical logistical challenges, but it does seem like a freelance-to-full-time process could lead to better outcomes all around.


There are all sorts of problems with this approach.

Contract software development --- relatively short projects with no up-front commitment to long-term work --- is, when sold and delivered competently, much more expensive than salaried software development. That's why people hire full-time.

The very best software developers know this. They're not as a rule especially good negotiators, but they're familiar with the market. For every firm offering them temp-to-perm work, there are 4 others competing over who can offer the most attractive immediate full-time offer.

That makes the "standard" recruiting pipeline for temp-to-perm companies an adverse selection problem. And, I believe, it's also often an exploitative one. People taking these temp-to-perm contracts aren't getting paid 2-3x their salaried rate to work on temp terms. But that's the lower side of what contractors demand to accept the kind of risk we're talking about with these projects.

Really, companies that do temp-to-perm are telling the market, "we're so bad at qualifying candidates that we're going to fob the entire problem off on our candidates, and hope there are enough competent people with such poor negotiating skill in the labor market that we can get away with that indefinitely". For now, they're probably right --- although taking this pathway is, I think, doing damage to the long-term integrity of their team and processes. But I wouldn't bet on it.


You make a lot of good points. Temp-to-perm could easily turn into an adverse-selection situation, and there is potential for it to get exploitative.

That said, I do think there is appeal for candidates in contracting for companies as a way to get to know the teams a bit better before deciding where to take a full-time offer. At least it's something I've been glad I've done at times, and wished I could have done at other times.

Plus, most developers are not the very best developers, and they won't have four competing offers after only a couple of weeks of job searching. An awful lot of developers will need to spend hours prepping for interviews, working on take-home projects, going to onsites, and talking on the phone with hiring managers and recruiters. They spend a lot of time and effort for which they're not getting paid anything, and which is largely orthogonal to their actual work. When they do get a few offers, they have to make a decision with limited knowledge of what working at each company will actually be like.

I dunno. I still think it would be awesome if job-seekers and hiring teams could get to know each other in some more natural and meaningful way than the feats of strength performed in today's interviews. Maybe temp-to-perm is too problematic though.


For my first job-search out of school, I actually started off by accepting a temporary, fully-remote contract position for one of the first startups I was interviewing with at the time.

After a short one-hour pairing session with their front-end lead, I was offered a choice: go through the full one-day interview gauntlet onsite in SF to arrive at a full-time offer on the spot, or take a two-month remote contract to give us both more time to decide whether we wanted to work together full time. (The length of the contract was negotiable; I went with two months because I figured that's about how long it would take me to finish interviewing with all the companies I was interested in.) They made it clear that I could continue to interview at other companies during that time frame, as long as it wasn't so disruptive that it affected the quality of my work, and that I should feel free to turn down the full-time offer at the end of the contract if I found a better offer elsewhere. The latter option suited me better at the time: it was still fairly early in my interview process, so I didn't have any offers on the immediate horizon, and I definitely wanted to explore my options some more before deciding on my first full-time job. So I accepted it.

I had a blast working there for those 2 months, learned a lot, and among other things, completed a decidedly non-trivial addition to their app. At the end of the contract, they offered me a very generous full-time position, but I ended up taking a competing offer at my current employer instead. There were no hard feelings, and we parted on good terms.

I guess what I'm trying to say is, yes, temp-to-perm definitely isn't right for every candidate (or most candidates, for that matter), but it will give both you and the candidate much more confidence in the decision on whether or not to go full time than any interview process ever could, and there's nothing wrong with offering it as an alternative to the standard interview gauntlet and letting the candidate decide what's best for them.

You definitely have to come to terms with the fact that it's a 2-way street though, meaning you have to accept the risk that the candidate might find at the end of the contract period that it's no longer in their best interest to proceed with a full-time position at your company, and that they are free to act on that realization. Otherwise, if you try to place limits on that aspect of the arrangement, the only candidates who would likely accept it would be the truly desperate ones whom you were probably never interested in hiring in the first place. I for one certainly wouldn't have accepted the arrangement if I wasn't free to continue interviewing during the contract and free to turn down the full time offer at the end of it.


>>Rather than trying to read subjective, noisy signals from in-person interviews, give candidates a small battery of work-sample tests.

I tell people the same thing, but the typical response is "that just discriminates against people who have families and other obligations and can't set aside 10-20 hours on evenings and weekends".


A home test must be doable in 1-2 hours. Not 2 days.


Mostly right. The better way to think of it is that you have a bucket of hours from which to draw phone screens, in-person interviews, and challenges. If you do no challenges today, you shouldn't expand the bucket of hours to accommodate them, but rather take offsets from other activities.

Phone screens are particularly useless and a good place to scavenge hours from.


Other than the pain of being rejected (or rather the lack of validation), where does this reliability problem manifest?

Elite tech companies have no problem with the current implementation of technical interviews.

Engineers whose skills range from Great to Mediocre don't have a problem finding a job.

Software quality has generally improved or stayed the same.


This is demonstrably false. Lots of great engineers are terrible at interviewing. If you ask around, you can get even some very high-profile devs to tell you stories about why they're not working at Google or Facebook or Amazon because of how badly their interviews went. Equally importantly: lots of terrible engineers are great at interviewing, and most people in the industry have had the experience of working alongside someone who is failing up, 2-year-stints at a time.

What's true is that Google and Facebook have such enormous budgets and such huge positive profiles in the talent market that they don't need to work hard to put together functioning teams. They're paying more than they need to and having to deal with more bad hires than they should, but it's for them a marginal problem.


> lots of terrible engineers are great at interviewing and most people in the industry have had the experience of working alongside someone who is failing up, 2-year-stints at a time.

You mean lots of companies are terrible at firing terrible engineers.

The argument is basically that the current implementation leads to false negatives. What impact does that have?

Presumably those very high-profile devs aren't lacking employment opportunities, even if it isn't at the elite tech companies.

Companies, from Elites down to pleb-tier, while perhaps pressed for resources, are still able to execute.

So why is the problem of false negatives a problem we should focus on?


Almost every company is terrible at firing ineffective engineers. Even some of the companies that are famous for doing so ("'meets expectations' earns a generous severance package!") don't necessarily manage to accomplish that in less than a year. This stuff sounds very simple on message boards, but is much harder to do in real life.

The first N months of a developer's tenure at a job are difficult to measure because of ramp-up and calibration (even good developers might hop to different teams after getting hired, based on work-style fit issues). Whole projects can flounder and need rescuing, so that attention to ineffectiveness gets deflected to group leads. Most good development shops will try to make things work for a new hire who isn't performing at the 6-month mark, and that process takes a few months.

Once the decision is made to fire, most big companies will PIP rather than fire. Sure, once you've hit the 7-8 month mark of poor performance, your severance from the firm may be a fait accompli. But the ultimate process to make that happen can easily be made to take more than 12 months.

Once you have a year-long stint at a company, you can bank that "experience" on your resume.

And that assumes a relatively well-run company. Most companies aren't well run in this regard, and you can be a terrible developer yet last 2+ years in most jobs. If you can reliably put more 2+ year jobs on your resume than 1-year jobs, you will end up with what will appear to most companies to be an extremely attractive resume.

Remember, many management-track developers only write code for ~8-10 years. Once you fail up to Dir/Eng, you're basically tenured.

Since almost nobody qualifies candidates effectively, failing up is a super powerful strategy for this field.


Why are the terrible engineers terrible, leading to them being fired (after 1-2 years)?

I've not hired anyone who turned out to be bad at coding, but I have hired people who had other performance related issues, which were difficult to suss out during an interview.

Going by my own track record, my approach to interviewing has led to far more true positives than false positives, so I am biased towards continuing.


> other performance related issues

What are some examples of these?


Typically it's patterns of repeated performance issues, such as providing estimates then routinely not completing work on time.

Or failing to complete tasks they've taken on, requiring other team members to step up in order to meet deadlines.

There are more extreme cases, like insubordination, theft etc, but they are pretty rare.


> Engineers whose skills range from Great to Mediocre don't have a problem finding a job.

How are you defining "skills" in this case? "Skills" as in the skills needed to perform well in the traditional technical interview gauntlet?

If that's the case then you have a tautology on your hands.

And if not, then maybe it'd serve us better as an industry to try to find a better proxy than the traditional technical interview for whatever definition of "skills" we really want to optimize for.


Skills: writes code that's clean, efficient, simple, etc

My question is this: Why does the industry need to change the interview process?

If companies are able to maintain a quality bar for their employees, and engineers are able to find meaningful employment, doesn't that mean the system, even if it has a flaw (false negatives), isn't really a problem?

Perhaps the problem is that many of us who write code, consider ourselves good at it, and getting rejected is a hit to our ego. We want to change how interviews are done solely for the benefit of our egos, when instead, we should understand that if you interview, you might get rejected, and that is okay.

Nothing is fundamentally wrong with the process, nothing is fundamentally wrong with you, and in 6-12 months, you can try again.


> Skills: writes code that's clean, efficient, simple, etc

> My question is this: Why does the industry need to change the interview process?

Because at the end of the day the process makes very little effort to optimize for any of those metrics of code quality you just laid out, and purposefully offers candidates programming problems and working conditions that deviate significantly from those they're likely to encounter on the job for no apparent reason, thereby hampering its reliability as a useful proxy for developer skill even further.

> If companies are able to maintain a quality bar for their employees, and engineers are able to find meaningful employment, doesn't that mean the system, even if it has a flaw (false negatives), isn't really a problem?

No, it really doesn't. Just because a majority of software engineers manage to stay employed, doesn't mean we should ignore flaws in the system and insist that it's not a problem.

And if you would, take a moment to consider widening the definition of "companies" you had in mind when you made that assertion. Yes, the process works well enough if you're Google or Facebook and have a constant stream of candidates and can accept just about any arbitrary false negative rate, because you have the law of large numbers on your side when it comes to hiring. But startups and many non-tech companies large or small (most of whom don't have a lot of developer mindshare) often won't have that same luxury, because every month wasted in the hiring process due to a high false-negative rate will take you that much closer to the end of your runway if you're a startup, and to the eventuality of being out-competed by a more efficiently run, technologically innovative competitor if you're just another technologically stagnating company. Countless numbers of these startups and technologically stagnating companies die off every year, and likely in no small part due to their inability to attract, recognize, and hire skilled developers in a timely fashion.

> Perhaps the problem is that many of us who write code, consider ourselves good at it, and getting rejected is a hit to our ego. We want to change how interviews are done _solely_ for the benefit of our egos, when instead, we should understand that if you interview, you might get rejected, and that is okay.

I don't doubt this is _one_ aspect of why some people want to change the interview process, but I emphasized the _solely_ part because I take issue with that part of your assertion. Getting rejected hurts, sure, and that can certainly bias one's judgement towards wanting change for their own benefit. But that's certainly not the _only_ driver for why people want to change the traditional interview process. You can see countless well-reasoned, rational cases, that make no appeals to emotion, for the many issues with the traditional interview process, both in this thread and elsewhere, made repeatedly by people from all walks of the industry from both sides of the interview process. Some people definitely do want to change the interview process for purely selfish reasons, but that shouldn't invalidate any of the countless well-reasoned cases constantly being made against the traditional interview process.

> Nothing is fundamentally wrong with the process, nothing is fundamentally wrong with you, and in 6-12 months, you can try again.

I would offer the counterpoint that, if nothing is fundamentally wrong with you, and you can get accepted again simply by applying again in 6-12 months, without having made any significant improvements to your skill level, then that is a clear indication there's something fundamentally wrong with the process itself. It all comes down to the fact that the ideal technical interview process should optimize for vetting for developer skill, and do so in a reliable fashion, minimizing both false negatives and false positives.

The traditional interview process is well known to exhibit a high rate of false negatives. If your company is deep-pocketed enough and attractive enough to candidates that you can afford that false-negative rate by virtue of your naturally high rate of hiring, then by all means, keep using the traditional interview process. It has been proven to work well enough for companies hiring under those circumstances, and it's likely that for you, the risks posed by the high rate of false negatives won't outweigh the risks of adopting one of the less well-adopted and well-studied interview processes.

For everyone else, however, I'd strongly advise very carefully weighing how much risk you're really taking on by accepting the high rate of false negatives that comes with the traditional interview process, and considering one of the many possible alternatives before jumping blindly into that cargo cult.


Anecdote, but a telling one: in my previous round of job-hunting, I had a success rate of exactly 0% with whiteboard coding interviews and a success rate of 100% with other interview methods. That's a pretty stark correlation.


Is there a case study somewhere for such an interview? Like a description or a video or such?

Also how do you deal with the standardized interview being published? As in 'company X' uses a live coding exercise and they ask you to do X in a succession of changes. It would quickly come out for most larger companies.


You're smart people. I'm serious. For any of these types of concerns, you will come up with reasonable answers. My only advice is: don't overthink it. Solve problems when you know you're going to need to solve them, but err initially on the side of making things less onerous for candidates.


> If you ask the candidate to define the requirements for the task you're going to give them, you can't be surprised if the candidate imagines a use case with entirely different requirements from what you expected!

This section really resonated with me. I've had a few interviews where I came up with a different answer than what the interviewer expected and this resulted in a lot of tension, as the interviewer asserted they were correct with zero exception. This often led to a lot of dead-end scenarios and the interviewer trying to get me to see an alternate solution without telling me explicitly.

I once had an interviewer tell me that O(k) was constant if k was a config parameter (I disagreed because in practice, k would need to be tuned as a function of the size of the data, and it would be like saying O(n) is constant if you just fix n = 1M), and another tell me that you could solve an optimization problem without an objective function (in actuality, he had an objective function; he just thought it was obvious and didn't want to specify it). In both cases, I understood what they were trying to get at after they explained their solutions, but the process was incredibly frustrating, as they seemed to disregard my answer altogether.


I love interview horror stories! In my most recent job search, which was early this year, I debated with one interviewer whether checking for key existence in a hash was a constant time operation. I insisted it was, and the interviewer insisted otherwise with the same amount of conviction. That was just a small portion of an all-day onsite interview. After leaving, I was told they were looking for a more senior candidate. I was not upset.


Could just be a terminology mismatch. Checking for hash existence should always be constant, but checking for key existence is generally only amortized constant (that is to say, not constant across all lookups) - he may have wanted you to talk about addressing schemes and degradation at high load factors?


> checking for key existence is generally only amortized constant (that is to say, not constant across all lookups)

What you just described is "average" not "amortized". Amortized does not mean "across a set of representative inputs", it means "through time, even if given the same input repeatedly".

Amortized complexity is tricky sometimes. For instance, in a data structure storing a binary integer and allowing an increment operation, the number of bits flipped is amortized O(1). The same is true if the only operation is decrement. Neither is true if you have both increment and decrement!

Another example: you can change the accounting to make things amortized lower cost. For instance, a heap (aka implicit priority queue) can be said to have O(1) deleteMin by simply ascribing an extra O(log n) cost to each insert. In fact, you can say deleteMin has O(0) cost! An example of this being done in the literature is https://arxiv.org/abs/0903.4130, "Pairing Heaps with Costless Meld", by Amr Elmasry.
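The increment-only case above can be checked empirically. A small sketch counting bit flips while incrementing a little-endian bit list from 0 to 1000:

```python
def increment(bits):
    """Increment a little-endian bit list in place; return the number of bits flipped."""
    flips = 0
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0          # carry: flip a trailing 1 to 0
        flips += 1
        i += 1
    if i < len(bits):
        bits[i] = 1          # flip the first 0 to 1
    else:
        bits.append(1)       # grow the counter by one bit
    return flips + 1

bits = []
total_flips = sum(increment(bits) for _ in range(1000))
```

`total_flips` comes out to 2n − popcount(n) = 1994 for n = 1000, i.e. under 2 flips per increment on average, which is exactly the amortized O(1) bound (even though a single increment can flip O(log n) bits).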


There was no subtle argument. It only came up because I was on the whiteboard solving some simple word game that involved matching user input against a list of valid words (e.g. the Scrabble dictionary). When asked to explain the running time of the various operations, I said that we can store the dictionary in, well, a dictionary, and checking whether the user input is a valid word will be a constant time operation. The interviewer insisted, with no subtle argument about amortization or collisions or addressing, that this was in fact a linear time operation.
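That approach is a one-liner in most languages. A sketch in Python, with a made-up stand-in for the Scrabble dictionary:

```python
# Tiny stand-in for the Scrabble dictionary; a real one would be loaded from a file.
VALID_WORDS = {"cat", "act", "tack", "stack"}

def is_valid(word):
    # Average-case O(1) set membership (plus the O(len(word)) cost of hashing
    # the word itself) -- not a linear scan over the whole word list.
    return word.lower() in VALID_WORDS
```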


So likely thinking of it in terms of checking a tree based on characters instead of hashes of full words. Which is more viable would likely depend on what kind of memory constraints and dictionary size was being considered. Also very implementation dependent, what's the size of the hashes generated?

Edit: curiosity got me looking at this for Python, with an answer of "it depends, and both have correct aspects" based on Python's implementation [1]. The hashing varies by string length, the lookup is fixed unless there's a collision and probing, probing will be minimized by growing the dictionary as items are added. In any case, better than O(n) but worse than O(1) and with some possible slowdowns during loading unless you can set an expected final size during initialization.

Edit again: since I brought it up, turns out initialization doesn't really matter, resizing is not a speed factor. Relevant discussion starts from https://stackoverflow.com/questions/1298636/how-to-set-initi...

[1] http://www.laurentluce.com/posts/python-dictionary-implement...


Indeed, my best guess is that the interviewer had a mental check list and simply wanted me to say "prefix tree" or "trie" or similar. I suspect the debate over running time was just the interviewer's attempt to get me to say the magic word he wanted to hear. I also suspect the interviewer did not know much about hash tables or tries, or was so nervous and misguided about interviewing that the information was not readily recallable.
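For reference, the prefix tree the interviewer was presumably fishing for is itself only a few lines; lookup costs O(len(word)) regardless of how many words are stored, which matches the hash-set bound once you count the cost of hashing the key. A minimal nested-dict sketch:

```python
def build_trie(words):
    """Build a nested-dict trie; "$" marks the end of a stored word."""
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True
    return root

def trie_contains(trie, word):
    """Walk the trie one character at a time: O(len(word)) per lookup."""
    node = trie
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return "$" in node
```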


It's also possible the interviewer meant linear time as a function of the length of the word, which would be true. Regardless, not being able to express this clearly is a sign of a poor interviewer which unfortunately the candidate takes the hit for.


That was certainly not the case. Multiple times the interviewer "explained" that with my approach you have to run through every entry in the word list which is linear time. My guess is that the interviewer wanted me to mention a prefix tree of some type, and was completely blind to any other approach.


My favourite was an interview (in Java), where the interviewer said I needed to process some data (something abstract like a "record") and that I could assume I get my data in a "var". I asked what sort of data structure the variable held. He said "It's a var. You know what a var is, right?"

So many problems...


It can also depend on whether you count the O(k) time needed to hash a string of length k.


Wouldn't that argument be easily settled by discussing the implementation of a hash table?


I discussed it briefly. My interviewer was unpersuaded, and the other interviewer in the room (who wasn't performing this interview) suggested we just move on. I believe the unconvinced interviewer was simply a very junior developer and an even more junior interviewer. He seemed quite nervous.


One of the worst interviews I've had the interviewer was not only similarly incorrect, but then afterwards proceeded to try to give me advice on interviewing skills, and how I should've "taken his hint" and been "more amenable to other answers". I probably would've been open to alternate answers if they were, y'know, factually correct.


If you think that's bad, I was this guy[1].

tl;dr:

1) Interviewer insists that an operation must be n^2 "because it has a nested loop" even though the nested loops are only iterating over their own subset of the master string.

2) Interviewer refuses literally any test that would help decide which of us is right.

3) Interviewer turns out to not even understand the concept of resolving disagreements by appealing to agreed-upon facts, insisting that he would only be convinced by the absence of a nested loop (i.e. talking about some other algorithm).

4) Interviewer is five-year backend lead.

5) Interviewer vetoes me from the rest of the process purely because of that disagreement.

6) Because I thought I was going insane, I implemented it at home and ran the problem at different sizes, and the run time went up as O(n) like clockwork.

[1] It's okay, I'm fine with people knowing and have outed myself elsewhere: https://news.ycombinator.com/item?id=6070001
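The disputed shape is easy to demonstrate with a toy example (not the actual interview problem, which isn't given here): the loops are syntactically nested, but the inner loop only walks its own chunk of the input, so every character is visited exactly once and the total work is O(n), not O(n^2).

```python
def count_letters(text):
    total = 0
    for word in text.split():   # outer loop: one pass over the chunks
        for ch in word:         # inner loop: only this chunk's characters
            if ch.isalpha():
                total += 1
    return total
```

Doubling the input length doubles the work; "nested loop" says nothing about asymptotic cost by itself.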


> Interviewer turns out to not even understand the concept of resolving disagreements by appealing to agreed-upon facts

That's a wonderful description of an attitude I have encountered many times.


Thinking you are awesome because you know the answer to a question you have prepared for and asked many times is one of my minor irritants.


> 5 minutes of resume discussion > 45 minutes of coding

Has anyone else found that a good resume discussion is often much better than the coding component? I've concluded that implementation difficulties, decisions made (tradeoffs, technologies, etc.), and the collaborative environment around projects are much better signals than the code part of my interviews.

If you've dealt with a broad range of tech and can ask the right questions, it's relatively easy to get a strong signal from a candidate by having a deep discussion on their work on prior projects. Both are needed and very useful.


Yeah. If they can't tell me about what they've built before, and aren't a less-than-one-year-away-from-school new grad, I'm going to be wondering how they deal with real problems regardless of whatever they can do algorithm-wise.

If someone can talk about what approach they took, what other approaches they considered, what made it hard, what made it easy, how often the requirements changed, etc., then I'd rather ask them questions about that than spend the whole time in code. Just do some small spot checks to make sure they aren't taking credit for the work of others.


> Just do some small spot checks to make sure they aren't taking credit for the work of others.

Agreed, asking tough technical questions on this front gives a really solid indicator of a 'bullshitter'. For example, I often ask people to diagram a prior project architecture they worked on, and then ask them very fine details of how things work and try to challenge them with alternative approaches to see how they react. I get to experience working with them during the interview through which I learn if they're smart and capable.


Absolutely. If a person can't tell good from bad with that, they shouldn't be interviewing.


Here is how to do a good coding interview if you are a recruiter:

1. Do your homework: know what projects drive the interviewee (what kind of things she likes to code, what recent projects has she been involved in), make sure they are a match for your company projects and goals

2. Ask the interviewee for a git repository and take a careful holistic look at it (commit messages and code, timespan, branches, merges etc...)

3. Go for a 2h lunch with the interviewee in a relaxed environment to check their communication skills (talk about whatever you think is interesting; if you can't have, or don't know how to have, 2h lunches, you shouldn't be interviewing; check your life first and learn how to enjoy it before doing interviews).

4. Stop asking programmers to write code in 40 minutes. Learn how to show respect before expecting respect in return: if you didn't take the time to read through their code repositories, don't ask them to solve a 40-minute code problem.


Not everyone has a public git repository, let alone one that they commit to regularly. You're selecting for people that have tons of time outside of work and want to spend that time basically doing the same thing they get paid to do at work. Also, if they're writing for themselves, what makes you think that it would be production quality rather than just a collection of semiuseful hacks they want to be able to download occasionally?


I think you missed the part where the OP said not all candidates are willing or able to work on open source projects.


Well, step 2 ("Ask the interviewee for a git repository") can be a private repository. It is a very hard problem to solve if they have no visible code to show, but this happens in other areas too. Imagine asking a management candidate for the private e-mails he shared in certain tricky situations at previous jobs. Or even public e-mails. Or to take 40 minutes to write a team statement for a certain conflict scenario on a whiteboard.

Trust is entailed when recruiting in most areas. But in programming not as much.


> Well the .2 "Ask the interviewee for a git repository" can be a private repository.

If someone shares a private git repo, they should not be hired, and in fact should be fired from their current job, and possibly sued. (Think Anthony Levandowski sharing Google lidar designs as an example of his work with Uber.)

This isn't promoting trust. This is a violation of trust.


Yes I agree with you but I was not making the point of sharing a repository you shouldn't share to a recruiter (I do own a few private repositories that I wouldn't mind sharing to recruiters).

My point was that seeing code is not as mandatory for recruitment as we currently make it. I gave the comparison to management, but it happens even in engineering areas. It is worth knowing your candidate before asking them to write a solution to an air conditioning problem on a whiteboard in 40 minutes when they're applying for a mechanical engineering position. Especially if your company never solves air conditioning problems.


Interviews suck for both parties, a world without interviews would be a better world. Why can't we just hire someone based on his experience and projects he previously worked on? You can have a first month as a "trial" where both parties can cancel the contract.

Interviews suck because they are very, very biased and subjective, and also not a good way to estimate the actual performance of a candidate.

Say no to interviews! If you like what someone did, hire him. You can't get to know a person after 1 or even 10 interviews, because the interview environment is totally different from the work environment.

PS: I have been interviewed by 14 different people at Google. I didn't get the job. The overall conclusion was that some interviewers loved me, my skill, and my work; others didn't like me from the start or simply didn't care about me. Some of them had a good day, so they gave a good "review" even though I made mistakes; some of them had a really bad day (one fell off the chair) and gave me a really bad review even though, on the actual interview questions, I did really well.

(14 interviews = applied twice, 2 x (5 on-site + 1 online), and they also wanted 2 extra interviews to take a decision)


I'm sure most businesses would love that situation, but for employees it would mean quitting your current job while having a good chance of being fired after a month at the new one.

I imagine young employees would be happy (I did a bunch of short-term contracts myself), but once you have a mortgage and kids, it's nice to be a little more certain about your new job, before quitting your old one.


> I imagine young employees would be happy (I did a bunch of short-term contracts myself), but once you have a mortgage and kids, it's nice to be a little more certain about your new job, before quitting your old one.

And just to make this really explicit - this means that a company with this hiring practice would have a very hard time hiring experienced competent engineers. If I were looking for a job, I'd have lots of options, and I would not waste time on a company that demanded this.


As a hirer at my current company I would love to end interviews, but far too many candidates have resumes that look impressive yet fall short under pretty basic code scrutiny. The ease of appearing competent thanks to previous successes (whether by team support or outright luck) makes interviews a necessary evil.

You would be amazed at the number of candidates we get with solid work histories who submit code samples containing red-flag issues like SQL injection vulnerabilities. We always provide helpful feedback and politely decline these hires (we are a small team and we all learn from each other, but I can't start off a senior-level hire who doesn't understand basic web security), only to find out that they've been placed at some other company for their sought-after senior-level salary.
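For anyone unsure what that red flag looks like in practice, here's a minimal sketch (my own toy example, not code from any candidate) contrasting string interpolation with a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Red flag: user input interpolated straight into SQL.
    # An input like "' OR '1'='1" turns this into a query
    # that matches every row in the table.
    return conn.execute(
        "SELECT * FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the value as data,
    # never as SQL, so the same input matches nothing.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```

Spotting (and being bothered by) the first version is about as basic as web security scrutiny gets.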


The effort required to get someone new spun up on your project isn't trivial (unless what you're doing is trivial...). There is a lot to understand, both "hard" knowledge about the tech stack, product, specific features you're working on, customer use cases etc as well as the "soft" stuff, like communication style, the specific rhythm of the team and more often than not, reading a bit of politics. Never mind that this is stressful for the applicant (and would require them to quit their previous job for a perfectly insecure alternative), it takes a substantial effort on part of the existing team to do this.

A commonly mentioned number for application success rates is on the order of 1:100. Even if this process is 10x better, a team will have to welcome a new "apprentice" every month (and fire most of them again) to get a single new hire a year.

This is not obviously an improvement for either employer or employee.


> Interviews suck for both parties, a world without interviews would be a better world.

Actually, I rather like interviews. They typically pose some fun programming problems and I have a chance to talk about a lot of the cool work I've done over the years.

Honestly, I do think the interviewing process is very risky from a company perspective (it's insane to me that companies will invest tens of thousands of dollars into someone based on essentially an afternoon of conversations), but it's perfectly great for some candidates.


I didn't say I didn't enjoy the interviews, I loved most of them where the interviewer was actually involved.

That being said, that's my point with interviews: if you don't trust what the candidate wrote on his CV (say, 20 years of experience and a history of successful products), then how can you decide whether to hire him after only a few hours of talking to him? You could have a bad day, he could have a bad day; it might happen that he knows exactly the question he has been asked, or that he knows everything else EXCEPT that question. Usually it takes months of working next to someone to know how good he really is.


Some points are quite good, and in particular the author calling out the rubbish of asking "neat little algorithmic" questions (e.g. asking things like "implement mergesort").
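(As an aside, a typical answer to that mergesort question is only about 15 lines of Python, which is exactly why it mostly tests recall rather than engineering judgment. A sketch of what candidates end up reproducing from memory:)

```python
def mergesort(xs):
    # Base case: lists of length 0 or 1 are already sorted.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = mergesort(xs[:mid]), mergesort(xs[mid:])
    # Merge the two sorted halves, always taking the smaller head.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    # One of the halves may have leftover elements; append them.
    return merged + left[i:] + right[j:]
```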

However one thing I'd point out is that this guide still falls far short of being a good guide for finding engineers. Particularly noteworthy, in my opinion, is that the measurement criteria are still applicable only to the coding question itself. For example, "how well the candidate manages time" still applies only to this limited 40-60 minute question. It doesn't really help evaluate how the candidate might consider prioritization of tasks (implementation or otherwise) in a broader, bigger sense.


Here's another disadvantage of whiteboard coding: it's inaccessible to blind people, and impractical (even more than usual) for people with only a little vision (like me). Such candidates will need to do the coding on a computer, so one might as well make that the default, as the OP recommends, and eliminate one variable and a possible source of bias. Of course, once a blind or low-vision programmer stays at a given company long enough, they'll probably be on the other side of the interview at some point. So for consistency among a team of interviewers, I imagine it would be best if all of them have their candidates do coding problems on a computer, not just one of them.

I'm curious about why whiteboard coding is still a thing in the first place. Is it a matter of convenience or cost? If so, surely that equation has changed now that we have ubiquitous, cheap, portable computers. Or do some interviewers really think that it's good to deny a candidate access to a real computer while solving a coding problem during the interview?


These places already use a computer for interviews with visually impaired candidates. "Whiteboard" is just a name for a type of interview.

I even give people the option to use a computer if they are more comfortable with that than a whiteboard. It can be 50:50 in who chooses what.


This whiteboard stuff reminds me of when I was a CS student at the beginning of the '90s in France.

We had our exams on paper. And believe me, writing LISP on paper, knowing that if you forgot one parenthesis you got zero points ("if I type it on a computer, it fails!"), was pretty stressful, and unnecessarily so.

So I relate to candidates facing that. Dude, in your day-to-day work, you WILL have a computer to check those!

How can you decently check those skills on a whiteboard? I totally agree with what's stated in this post.


If you think writing lisp by hand is bad, try implementing a linked list interface in java on paper... shudder. (in fairness, we were sanely graded, minor syntax errors didn't subtract terribly from the grade...).


Writing code on paper for tests has been a thing in CS for quite a while. When writing lisp for tests, I used to sometimes write the code with no parens (using indentation to denote nesting). Then I'd go back and add parens where appropriate.

    defun add-pos-only v1 v2
        let sum 0
            if and > v1 0
                   > v2 0
                + v1 v2
                nil


No good interviewer is grading whiteboard coding on whether you forgot a parenthesis or semicolon.


A year ago I wrote a (shorter) post on the same topic: https://smarketshq.com/notes-on-interviewing-engineers-a4fa4...

A reasonable question should never require the candidate to know "just a single trick". Questions that require to combine approaches are far better - they can actually measure if the interviewee can reason about the problem and identify the points for trade-offs.

Btw: if you're interviewing more senior candidates, a code review question is often a good idea. It turns the situation around, from "how do you build this?" to "how would you fix this?"


Being a better interviewer is important.

As stated, some types of questions aren't a good judge of skills, like "implement mergesort." If you can implement mergesort, you probably studied up for your interview. You're not testing anything other than whether the candidate cared enough to study.

Interviewing should test the candidate's ability to write code that is:

1. Correct (solves the problem)

2. Efficient (solves the problem without unnecessarily wasting resources)

3. Understandable (goes hand in hand with maintainable)

4. Maintainable (can be modified by another developer, or even yourself after a few months)

Coding on a computer can lend itself to all of these (no brainer there), but you lose a key aspect of whiteboarding that I think is really important--how a candidate approaches solving a problem.

Knowing how to implement merge sort, or memorizing every possible linked-list question isn't all that valuable--but having a concrete approach to tackling problems is, and I want to understand that process from my candidates. I want to know the design tradeoffs you make, how you think about optimization, how do you reason about constraints as you learn about them.
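(For concreteness, the canonical memorizable linked-list question is something like "reverse a singly linked list in place". A sketch, with names of my own choosing:)

```python
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def reverse(head):
    # Walk the list once, flipping each node's next pointer
    # to point at the previous node instead of the following one.
    prev = None
    while head is not None:
        head.next, prev, head = prev, head, head.next
    return prev  # old tail is the new head
```

Knowing the answer by heart says little; explaining why the pointer flip works, and what happens on an empty list, says more.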

It seems that explaining your process while coding on a computer would be difficult to do without practice--and if you're going to practice, why not instead just practice whiteboard coding, which is how most tech companies interview.


So do you want someone who is good at explaining how they are looking for the solutions, or someone who is good at finding the solution? For me personally approaching a problem never involves a whiteboard.


I don't think I've ever been at an engineering company that didn't have whiteboards, at the very least in conference rooms--I infer from that that some portion of employees find them valuable.

Clearly someone who is good at solving problems is preferred. However, between someone who is merely good at solving problems and someone who is good and can also explain/demonstrate their process, the latter is more desirable.


I have to disagree with this trend of having people write code on a computer instead of a whiteboard. Sure, it mimics the real world, but in practice you spend a bunch of time dealing with crap about syntax errors, and argument order mismatch instead of dealing with the actual underlying algorithm. Even worse, you may end up with a candidate that realizes that there's probably a library that would be really useful for this, but then has to spend time googling for the name of the library, then installing it, and then desperately skimming for a quick start in documentation because the question isn't, "Can you work google and run pip install?", but rather some other task. So now you've probably lost 15 minutes just dealing with that shit.

If you're going to do this, it simply doesn't work in a 40 minute setup without some sort of provided scaffolding and test cases.


Is doing interviews in an IDE and expecting fully functional code actually a thing? My company uses a simple text editor: perfection is not expected, and it's still way better than a whiteboard.


I can confirm that this is a thing at multiple companies. Instead of testing the actual algorithm, you'll encounter such blasts from the past as: "Network Socket Debugging", "How The Fuck Does Java Read From STDIN?", and everyone's favorite, "Why The Fuck Did You Give Me Text Data In CSV Format?"

I have seen this go well exactly once. And they provided an environment with a working (albeit do-nothing) scaffold along with test data. Also, it was like three hours in a room.


> Hopefully in your realistic setting, the challenge comes from the difficult tasks you are given to solve, not some jerk that's sniping at you from the sidelines.

I wish I could say that's true. If you're getting sniped from the sidelines in your interview, it's probably a part of the culture where you are interviewing. So it's quite likely to be a non trivial part of the problem you're solving.


There is no reason to haze potential employees.

Just fire them if they don't work out. You set expectations by telling them up front that they are on probation and will be fired if they fail to live up to their resume.

Some of the worst programmers I ever worked with could pass hazing interviews like a champ. Whereas some of the best would refuse to step foot in companies with a reputation for hazing their prospects.


That's a funny term to apply to this type of interview process, but it fits. Kudos.

Agree on the invalidity of technical interviews. We hired a guy who correctly answered every question I asked, but he could never ship a product. He always got distracted by silly things. He was great at solving tough little problems, but never could grasp the big picture.


I have interviewed many people with a technical test; that was the easiest part, i.e., determining their competency. The hardest part is figuring out what their work ethic is, whether they can pull their own weight, and whether they will cause friction in a team. Sadly, you often only find that out AFTER hiring them.


Small take-home paid project.


I have a lot of thoughts about this particular topic, probably ideas that are unpopular and definitely go against the grain, at least in the way the Bay Area operates.

In my opinion, if you bring a candidate on-site for a 4+ hour interview and you ask them to write code, you've failed to properly vet the top of your recruitment funnel. As a software engineer, writing code is the BARE MINIMUM of your job. Imagine hiring a construction worker by merely asking them to hammer a nail for 4 hours.

You should have developed a thorough understanding, through various means such as phone screens, code samples, portfolio work, and a simple tech challenge, of whether they can write code before you even bring them on-site. Once they're on-site, you should quickly verify that they can actually do what you previously learned they can do. I'm talking 15-20 minutes worth of time. The remaining time should be 100% on the "soft skills" and other things mentioned, including but not limited to my big criteria: "Can you quickly get up to speed and learn the things you don't already know?" and "Do I want to sit next to you for the next 6 months of my work-life?"

If your on-site interview includes 45 minutes of coding (much less multiple sets of 45 minutes of coding!), you're doing it wrong. Very wrong.

Keep in mind you've asked your candidate to effectively take a day off work from their current job. They're calling in sick, taking a vacation day, or somehow lying to their boss and taking a risk of not being in their office when they're in yours for the interview. If they're smart, they've lined up multiple interviews with multiple companies, because putting all their eggs in your basket is just a terrible idea. Imagine having to take four days off to interview with four companies. And they probably aren't four days in a row, so you can't just pretend to have the bubonic plague or something.

You're asking them for a huge commitment of their time. And if you're willing to kick out a candidate because they didn't know how to invert a binary tree, then you should have found that out before they even walked in the door and saved everyone a whole bunch of time.

IMO, an interview should be a max of 4 hours. Any more than that and the candidate is simply repeating themselves over and over to different people, potentially solving the same tech problems (that should have been evaluated beforehand), and generally making it a huge waste of time.

My two cents. Happy to debate these ideas with anyone interested.


All his example interview questions are stuff that were actually used at Dropbox!


His resume (http://www.lihaoyi.com/Resume/) suggests he worked at Dropbox for quite a while. :)


Mostly makes sense, but your ability to write a parser is still going to depend on how many parsers you've written, when you wrote them, and whether you're able to use a framework/tool to do it.
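(E.g., someone who has written a few parsers can sketch a small recursive-descent expression evaluator from muscle memory; someone who hasn't may not know where to start. A toy sketch of my own, assuming only integers, +, -, *, / and parentheses:)

```python
import re

def parse(expr):
    """Evaluate an arithmetic expression via recursive descent."""
    tokens = re.findall(r"\d+|[()+\-*/]", expr)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat():
        nonlocal pos
        tok = tokens[pos]; pos += 1
        return tok

    def atom():
        # atom := NUMBER | '(' sum ')'
        if peek() == '(':
            eat()
            value = add()
            eat()  # consume the closing ')'
            return value
        return int(eat())

    def mul():
        # term := atom (('*' | '/') atom)*
        value = atom()
        while peek() in ('*', '/'):
            if eat() == '*': value *= atom()
            else: value //= atom()
        return value

    def add():
        # sum := term (('+' | '-') term)*
        value = mul()
        while peek() in ('+', '-'):
            if eat() == '+': value += mul()
            else: value -= mul()
        return value

    return add()
```

Whether that pours out in 10 minutes or 40 depends almost entirely on prior exposure, which is the point being made above.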


What are some good resources for developing good algorithm skills for a non CS grad? Examples in JS, Python or Go preferred.


There's lots of stuff online.

Check out hackerrank.com for example. There are other sites like that out there.

It's becoming a real cottage industry, this whole IT interview prep thing.


While there are a variety of ways you can conduct a programming interview, the goals typically are the same:

Will the candidate be able to write working code if they join the team?

Can the candidate discuss code and problems with the people they'll be working with?

Can the candidate reason about arbitrary problems and constraints?

Is the candidate someone we would enjoy working with?

All of these can be solved directly by asking the candidate if they'd be comfortable doing an actual task on the actual codebase, having them sign an NDA, and then giving them a bugfix to perform. They're compensated for their time, of course.

I'm doing it right now as a candidate and it's by far the best interview experience I've had. The others were all artificial.

A lot of people have a problem with this model for one reason or another, but in terms of effectiveness it's probably at the top 1%.


Almost. Instead of doing this, take an old set of bugfixes from the tree, 1-3 of them, give the same bugfixes to every candidate, and come up with a standard rubric for judging the fixes. (If it's an open source project, synthesize 1-3 bugs, using the history of bugs on the project as a guidepost).

This is a "work sample test", and it's the gold standard for judging technical qualification. Firms should rely on them to the exclusion of subjective interviews.

Within reason, to a first approximation, every candidate gets the same problems (you may iterate over time). You can judge candidates apples-apples.

You don't need to pay candidates for their time, because they're not doing work for you; you already fixed these bugs.


Yeah, I'm a big fan of your approach. Unfortunately it seems quite difficult to get anyone to do it. Most companies have interview pipelines that are well-established and immutable, so only new companies are inclined to try it. And even Triplebyte unfortunately fell back on a combination of abstract coding tests and amorphous "cultural fit" criteria.

It was a neat hack as a candidate to propose to the interviewing company that they do essentially the same thing as a work-sample test and hear them say yes. That only works if you propose they let you try your hand at doing their actual work. I wonder if it can be generalized?


I think asking candidates to do "real" work is bad for a bunch of reasons:

* It's intrinsically exploitative, even when compensated, because real work done under similar terms is paid far more highly than prorated salary (it also does weird things to salary negotiation, by extracting an up-front concession on rate).

* It requires different candidates to work on different problems, which means you're not getting a reliable record of how candidates do on the interview so you can tune it over time.

* Because the best candidates will generally refuse to work under these terms, most firms that do paid up-front work have a fast-path for elite candidates, which is an obvious process failure.


Did you read the grandparent? You use the same bug by checking out a commit from before the bugfix was applied.


Triplebyte's flaw is like that of many other take-home tests: they favor people who lied about how long it took.

When I did it, I had to implement a regex parser. I knew intuitively that the state machine approach was best, but if I read up on state machines and coding them, I would bust the 3 hour limit, so I did the best I could to deliver something working within the three hours.

Feedback: "Obviously the state machine approach is better, which you didn't use. You're not qualified."

Guess I should have just lied and gotten placed like the ones who did.


The state machine approach can't handle perl-compatible backreferences (since true regular expressions can't use them), which apparently some people want. That's feedback you can defend yourself from to some degree.

When I implemented a regex parser for Triplebyte, the time limit was much longer than three hours, and I did use state machines. The feedback I got was "we thought you wrote a great, full-featured regex parser, but we need someone who comes off better in interviews".
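(For anyone curious what "the state machine approach" amounts to here, a minimal sketch of my own, not Triplebyte's spec: compile the pattern into one state per token and simulate the set of live states. This toy supports only literals, '.', and postfix '*', so, as noted, no backreferences.)

```python
def regex_match(pattern, text):
    # Compile: one (char, starred) token per state.
    tokens, i = [], 0
    while i < len(pattern):
        starred = i + 1 < len(pattern) and pattern[i + 1] == '*'
        tokens.append((pattern[i], starred))
        i += 2 if starred else 1

    def closure(states):
        # Starred tokens match zero or more times, so any state
        # sitting on a starred token can also skip ahead past it.
        stack, seen = list(states), set()
        while stack:
            s = stack.pop()
            if s in seen:
                continue
            seen.add(s)
            if s < len(tokens) and tokens[s][1]:
                stack.append(s + 1)
        return seen

    states = closure({0})
    for ch in text:
        moved = set()
        for s in states:
            if s < len(tokens) and tokens[s][0] in (ch, '.'):
                # A starred token may match again; others advance.
                moved.add(s if tokens[s][1] else s + 1)
        states = closure(moved)
    # Full match iff some live state consumed the whole pattern.
    return len(tokens) in states
```

Simulating a state *set* like this avoids the exponential backtracking of the naive recursive approach.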


You can already see that this is a broken technical qualification process. If you can "defend yourself" from a demerit on the challenge's rubric, you have a subjective process. The smooth talkers will glide past regardless of their abilities, and the gifted developers that don't have interview skills will get washed out by it at random.

If there's a point to outsourcing technical qualification, it's exactly to avoid this kind of problem!


Time limits on these kinds of things don't make much sense to me.


> And even Triplebyte unfortunately fell back on a combination of abstract coding tests and amorphous "cultural fit" criteria.

I'm not sure when you went through Triplebyte's process, but I did recently and it definitely included a substantial practical programming section which involved writing an actual program and some debugging.

They do have the difficulty of not controlling the downstream on-site interview process, so there's a limit to how much they can innovate. Ultimately their goal is to find candidates who can pass company interviews, some of which are still heavy on algorithms.


I also agree. As someone who has recently co-founded a startup, this is something my co-founder and I have spent a lot of time discussing and exploring.

My personal experience with this style of interview was that it really helped me to personally demonstrate my abilities. I was interviewing at a startup in San Francisco in 2012 who wanted me to do a quick version of "battleship" as part of their hiring exercise. I proposed, mid interview, "how about I help you solve a performance problem with your current system. What's your biggest issue right now?" My interviewer actually showed me, and we ended up solving the problem together and then deploying out the change.

The funny thing about that experience was that when we were done improving their platform, the VP of engineering was told what I did, and then had the interviewer go back and do the battleship exercise anyway. They needed to "benchmark" me against other candidates, and the only way to do that was with a consistent "test". I did far more poorly with the battleship exercise than I did helping them on their platform.


Can't really blame him; it might seem kind of arrogant to refuse the task they give you and propose one yourself, although it is pretty impressive that you could do something useful during the interview on a codebase you hadn't seen before.


It's also important to make sure people are well-rounded. If one can tweak an architecture to put a queue in the right spot, she/he may still be the person that cannot properly implement simple concepts like "separation of concerns" or forgets input validation or security checks.

At the end of the day, with start-ups I really value that mix of system and hands-on code. Without initial hires having both, it's harder to build a solid system while iterating so quickly. Amazing people skills are also key.

Over-leveling and bad hire situations are no good for anybody.


> They needed to "benchmark" me against other candidates, and the only way to do that was with a consistent "test".

If your employer were ever sued by a candidate for discrimination and it came to light that you had received different treatment than other candidates, some very uncomfortable questions regarding the reason would be asked in the courtroom. An unfortunate bit of fallout from the litigious environment we live in today, but it is what it is.


Wouldn't it be (well, look) even worse if any number of the non-hires did great on the battleship test?


It's unfortunate, isn't it? We spend all this time constructing artificial tests when the real thing is all that matters.


They're compensated for their time, of course.

Taking on paid work for another company, possibly a competitor, would put nearly anyone who did so in breach of their contract with their current employer, as well as complicating their tax situation. Not to mention the vacation time they would be burning on it. You will only be able to hire freelancers or people currently unemployed by this method. Do candidates set their rate, or is it just a token amount?


In my case, I offered to do it for free. (It wasn't their idea to give me a task on the actual codebase.) Almost nobody would be fine with that, but I want to work there. The other option was to hope that none of the other candidates seemed like a better choice. It's good to lock in an opportunity.

Perhaps it's unworkable in the general case. That would be unfortunate; it's the most effective type of interview.


I think it would be prudent for the interviewer to just say that "of course we will compensate for your time because this takes a couple of hours". If you happen to have a contract that forbids you from doing that, just say "Can't take any comp, I'm happy to do it for free". I don't think the key is compensation, but at least not assuming the subject will work for free. If you are the interviewer in that situation, give them the t-shirt and mug.


Unless your company was exceptionally unique in some way I'd probably just pass instead. It's awfully one-sided: you're just sending me some code and I'm supposed to invest hours on it. At least with an onsite interview our priorities are aligned: neither of us want to waste more time than we have to.


Yeah, we all have different tastes in interviews. I'd much rather spend 2h working for free than writing some tree traversal or fib function on a whiteboard for ten minutes. Interviewing is hard.


Is it? I feel like it's a skill you learn once when you're green and never have to think about again. Whenever I'm starting a new job hunt I just dust off my copy of CtCI, do a few practice problems on hacker rank, and I'm good to go. Yeah, it has almost nothing to do with my actual job but I've never felt it to be an exceptionally onerous process.


I used to think that, but honestly after recently going through a job search I think it's untenable for most scenarios. As a candidate, I want to be able to interview with multiple companies simultaneously—if I had to do a trial week with each company, I'd never be able to get multiple parallel offers.

There's also significant onboarding complexity which I hadn't factored in. One company had me do a trial period and it took me half a day to get productive—time which is a sunk cost for them (since they had to pay for it) and made me feel bad about my lack of productivity. In the same time period it took to onboard with their codebase/build tools, I could have easily completed a full interview loop with another company.


True enough, but it sounds like the above model isn't for you. It's for everyone who is talented but doesn't have the option to go interview with multiple companies simultaneously. Most devs are in that position.

I'd much rather commit to a single company that I want to work at and do this process to put me ahead of the other candidates. But that's not a decision that everyone feels the same about.


I get your sentiment. But this is quite problematic if you already have a job. It's not so easy to take a week or month off on short notice. What happens if you don't get your dream job? You can't just quit your current job.


Pair programming should cut the startup time.


It would also substantially increase the cost of this interviewing method.


This is 100% the "best" way to interview/hire a person. A small company that screens someone for basic competence and then does this is great. It does have its drawbacks, though:

1. Requires a significant time investment on your part

2. Isn't the most efficient -- if you have someone work a week on a project, and decide they're not a good fit, you just lost a week.

3. You can't really have multiple candidates for the same position coding at the same time. They would feel like they would be competing (and really, they would be)

4. For the reasons above, this would never work for a bigger company.

By the time they are doing their week-long "trial", you should be pretty confident through screens anyway. Just ask yourself this -- have you ever said "no" to a person you've brought in to work on site?


> Just ask yourself this -- have you ever said "no" to a person you've brought in to work on site?

I have. I've done plenty of trial weeks where the candidate didn't work out. They got paid for their time, but we didn't extend them further offers.


This is an interesting candidate-first approach, but I have some concerns:

This requires that candidates are experienced at the language or framework you use. Are you not prepared to take someone with different experience? What about graduates? What if their ability with one framework does not transfer to another that you end up assigning them to?

Not all bugfixes are equal. If you haven't solved it yourself before, how do you know how hard it will be? If you give candidates questions that vary in difficulty, how do you compensate for that?

Is the ability to dive into a codebase and fix (presumably) trivial bugs really what you should be optimizing for? Is it actually better than abstract algorithmic questions for determining whether the candidate can reason about arbitrary problems and constraints?

How would you overcome your unconscious biases when answering the question "Is the candidate someone we would enjoy working with?"


After I proposed the idea, they said they liked it and were willing to give it a try. They offered 1) slight changes to the UX, 2) adding a payments feature, or 3) some back-end bugs that need to be resolved, and asked which of those sounded most appealing as a test.

This was perfect from my perspective, and after I signed the NDA I told them all of those sounded fine. I mentioned that fixing a bug is one of the best ways to get familiar with a new codebase, so I'd probably focus on those, but that I'd like to hear about the rest too, and that I could probably throw in some of the feature requests or UI tweaks.

That's a bit more candidate-driven than might be expected, but it's really wonderful so far. I feel like I have autonomy, while at the same time they'll get the data about whether I'm effective in the field.

The bias question is important, and I'm not quite sure the best way for us to collectively deal with it. All I know is that there seems to be far more bias in a traditional interview pipeline. One of my recent rejections came after being asked "A hammer and a nail cost $1.10, and the hammer costs a dollar more than the nail. How much does the nail cost?"

Obviously the point of that is to see how you think through a problem. But such problems are useless: they're abstracted from what we as engineers do in the field, and the candidate knows that their entire fate rests on how well they answer that question. Yet those are the kinds of metrics real companies are currently using.


> One of my recent rejections came after being asked "A hammer and a nail cost $1.10, and the hammer costs a dollar more than the nail. How much does the nail cost?"

This just got me to see if I can still work through a basic high-school algebra problem (yes, slowly; guess-and-check would've been quicker but less fun).


Algebra is how I solved it. :)
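For anyone who wants to check their answer, the substitution can be spelled out in a few lines. This is just a sketch; the function and variable names are mine, not from the original puzzle:

```python
# The puzzle: hammer + nail = 1.10, and hammer = nail + 1.00.
# Substituting: (nail + 1.00) + nail = 1.10  =>  2*nail = 0.10  =>  nail = 0.05.

def nail_price(total=1.10, difference=1.00):
    """Solve hammer + nail = total, hammer - nail = difference for the nail."""
    return (total - difference) / 2

nail = nail_price()
hammer = nail + 1.00
assert abs(nail + hammer - 1.10) < 1e-9
print(round(nail, 2))  # intuition says 0.10, but the answer is 0.05
```

The intuitive-but-wrong answer of $0.10 fails the check: a $0.10 nail plus a $1.10 hammer would total $1.20.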


If the candidate isn't familiar with your particular language(s)/libraries/frameworks/etc., you may be able to isolate a part of your process, provide a sample data set, and give them an operation to perform on it.


> asking the candidate if they'd be comfortable doing an actual task on the actual codebase, having them sign an NDA, and then giving them a bugfix to perform. They're compensated for their time, of course.

At Google, we call these internships.

Well, more or less -- there are restrictions regarding enrollment in a degree program, the work is for a fixed duration rather than for a fixed scope, etc. But the goal from the company's perspective is largely the same.

I'd be surprised if internships at other companies in the sector don't serve the same purpose.

(obligatory disclaimer that these are my opinions, not the company's, etc.)


This doesn't work for those who aren't still at university, and there are strict rules in the USA about the role of an "internship": it's meant to educate the intern, not to be a source of cheap labour.


I never said anything about internships being a source of cheap labor. Please don't put words in my mouth.

While yes, an intern typically makes less than a full-time engineer, you have to realize that the intern reduces the productivity of the host engineer, who has to spend time training the intern, looking over the intern's work, and so forth.

The real value in the internship program is that it essentially creates an extended interview process, after which we can evaluate the individual (given firsthand experience of how they operate in a work environment), and the individual can evaluate the company.


Rather than working on a real issue, I'd recommend a made-up task/feature, modeled after something they might actually do. Just put it in a repo and share to all candidates. It shouldn't take more than half a day's work. Then the next round is all about the candidates presenting their work and explaining their thought process.

It's simulated, but exemplary of real work. And easily repeatable. (Also no need for NDA, etc.) Just requires a thoughtful engineer to develop the task.


I wonder where companies or hiring managers can post job openings that support this hiring process, sometimes called "trial employment" or a "job audition" for software engineers. It looks like a great way to get the best hire and prevent a bad one, although it can only work for a very limited number of companies and candidates.


You can still simulate that within the more standard interview process. Pick a real bug or small task in your codebase and give that to every candidate. Obviously you'll want to complete the task for real in your codebase, so just keep an interview branch in your source control.



