This New York Times article has been doing the rounds on Twitter recently. Titled “In Head-Hunting, Big Data May Not Be Such a Big Deal”, it is a condensed and edited interview of Laszlo Bock, an SVP of people operations at Google. The title, together with the seeming nuggets of wisdom in the article, forms just the sort of potent combination that invites endless tweets and retweets from the San Francisco/Silicon Valley echo chamber. Well done, Adam Bryant.
However, I have too many issues with this article not to note them. An article mentioning a big data study at a company as venerated for big data as Google naturally invites wide-ranging conclusions. As I explain below, drawing them would be a mistake.
First of all, I read the article three times, but could not find anything in its body that warrants the rather strong claim in the title that “Big Data may not be such a big deal”. On the contrary, Bock clearly says that just giving managers and leaders a visible set of measurements was enough for them to strive to improve. Yes, there is mention of the (organizational) context being important and of the need for human insight, but even the most ardent fans of Big Data have never been heard advocating that we jettison those. Elsewhere, the article says “I think this will be a constraint to how big the data can get”. I cannot help but wonder whether the title was bolted on by some “social media expert” at NYTimes.
Then there are other questions that the article, and the correlation study it mentions, raise.
Bock mentions that “the proportion of people without any college education at Google has increased over time as well. So we have teams where you have 14 percent of the team made up of people who’ve never gone to college.” Elsewhere, he says that the average team size is six. Putting the two together, 14% of a six-person team is less than one person, so Bock is very likely talking about bigger teams. The article does not say whether those teams are all engineering/product or include other departments, such as support, which apparently forms a large part of Google, nor how many such teams there are. It is a bizarre piece of data to mention without that context.
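To make that back-of-the-envelope arithmetic explicit, here is a minimal sketch; the 14% figure and the average team size of six are from the article, and everything else is purely illustrative:

```python
# Figures quoted in the NYT article
avg_team_size = 6
fraction_no_college = 0.14  # 14% of some teams never went to college

# How many people does 14% of an average-sized team amount to?
people_without_college = round(fraction_no_college * avg_team_size, 2)
print(people_without_college)  # 0.84 -- less than one whole person,
# which is why the 14% teams must be considerably larger than average
```

In other words, for 14% of a team to correspond to at least one actual person, the team would need at least eight members, i.e., noticeably above the stated average.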
The article cites a study at Google that looked for correlations between interview types and success at Google after two to three years. It concludes that things like GPAs, test scores, puzzles, and brainteasers have no correlation, and that the only thing that correlates well is “structured behavioral interviewing”. At the very least, this deserves more explanation. Was the study done over all of Google, or only engineering, or some other part of the organization? Does the lack of correlation hold if you restrict attention to the top 20% or the bottom 20%? Should everyone basically just do structured behavioral interviews? Now, I am myself a fan of behavioral interviewing (by the way, here is a link if you want to read some examples of behavioral interview questions) and include a few such questions in my interviews of candidates at Twitter, but I can’t imagine the whole interview panel for an engineering or product position being of this sort.
Turning to the article’s juicy assertion that GPAs are not predictive of future success at Google, consider this from the article: “Google famously used to ask everyone for a transcript and G.P.A.’s and test scores, but we don’t anymore, unless you’re just a few years out of school.” So Google still does ask for those if you are “just a few years out of school”? And for experienced candidates, I don’t know of anyone, in Silicon Valley at least, who would go by transcripts and GPAs, so it is odd to point out that Google doesn’t either.
My biggest gripe with the article is that if GPAs and puzzles are not predictive in their study, then at best we are back to square one. We are not told what is predictive, so what are we to rely on? A lot of people would jump to the conclusion that those things should be dropped from interviews altogether. But that would be wrong, simply because who is to say that whatever replaces them will be any more predictive…
To be sure, I am myself not a fan of going by GPAs and performance on brainteasers in interviews. Nor am I casting doubt on the Google study; in fact, its results look quite plausible to me, simply because in data analysis, not finding predictive variables is rather common. I am just distressed at the suggestive tone of the article, which invites wide-ranging prescriptions for how to and how not to interview. New York Times: I am used to better from you.
——
PS: BTW, in my own constant search for an answer to the question “what sort of interview should I do?”, here is a recent book by Nolan Bushnell (the founder of Atari, supposed father of the video-game industry, and apparently someone who launched the career of Steve Jobs). Though it seems a bit as if Bushnell might be trying to ride the wave of Steve Jobs’s popularity, the book still presents many out-of-the-box ideas on ways to go about interviewing, especially if you are looking for creative and exceptional engineers (as I am at Twitter).