Best quote from the entire article imho:<p>> The data suggested that the success of teams had much less to do with experience, education, gender balance, or even personality types; it was closely correlated with a single factor: "Does everybody talk to each other?"<p>> Ideally this talk was in animated short bursts indicating listening, involvement and trust – long speeches generally correlated with unsuccessful outcomes. For creative groups such as drug discovery teams or for traders at financial institutions, say, the other overwhelming factor determining success was: do they also talk to a lot of people outside their group? "What we call 'engagement' and 'exploration' appeared to be about 40% of the explanation of the difference between a low-performing group and a high-performing group across all the studies," Pentland says.<p>> It was important that a good deal of engagement happened outside formal meetings. From this data, Pentland extrapolates a series of observations on everything from patterns of home-working (not generally a good idea) to office design (open and collegiate) to leadership. "If you create a highly energetic environment where people want to talk to each other right across the organisation then you have pretty much done your job right there."<p>So true.
It's been known for a long time that standard interviews are a poor indicator of future job performance, so any analysis of data regarding behavioural modes, a person's intrinsic motivators, cultural fit etc. will probably do a slightly better job. Presumably all of this will still be subordinate to an on-the-job trial anyway, though.<p>It'd worry me that, although this guy says you need to increase behavioural diversity while minimising value diversity, you're effectively just shrinking the pool of potential employees rather than figuring out ways that a larger pool of people could fit. Or in other words, whether it's Myers-Briggs, Belbin's roles, IQ tests etc, it seems that evaluation tools are trying to quantify the diversity of people and pick off, with increasing accuracy, the exact archetype that aligns with organisational goals. But if you assume that people are diverse, that employment roles are diverse, and that there's a large pool of both, would it not be a better idea to focus on quantifying the differences in attributes required for your employment roles, so as to maximise your pool of potential applicants?<p>I'm willing to bet that someone who's slobbish and lazy and unpleasant could play a valuable role if you could quantify the requirements and goals of positions in your company, for example. And does value alignment matter for all roles in all companies, or is it just a phenomenon of the last decade, an intuitive way to maximise employee investment and increase profit? Why would financial staff need to want to 'change the world' or some bullshit to work at crappy Startup X?
Nowadays, any analysis applied to a data set is "Big Data." While it's easy to chalk this up to how the media treats most science stories, I had a hard time understanding the demarcation. I now go by DevOps Borat's definition: "any thing which crash Excel" [1]<p>[1] <a href="https://twitter.com/DEVOPS_BORAT/status/288698056470315008" rel="nofollow">https://twitter.com/DEVOPS_BORAT/status/288698056470315008</a>
I'm suspicious of this technology. While I concede that it probably has some statistical and even empirical merit, those in HR will always do everything they can to make sure that "troublemakers and journeymen" get kept out.
But unfortunately, most people who have had the same job for a long time are either just trying to pay their mortgage, smiling through gritted teeth long enough to see their kids alright, or have no imagination/ambition, while those who switch jobs regularly are often simply tired of taking shit from people. I think you'll find that "troublemakers" have usually tried to alert their line managers to the real flaws in the company's systems and been told to shut up. Look at the businesses this data is being taken from: law firms, corporate banks etc. It is obviously yet another toy to allow rich kids to recognise their own. What has this got to do with those who do the vast majority of the actual work?
This is exactly the sort of thing that I have been absolutely terrified about for a while now. When you're being hired, companies often tell you that they're going to run a professional background check, and I am concerned this may mean they buy data from Google, Facebook, cell carriers, etc.<p>Unsavory political views? Get blacklisted.<p>Don't get along with a relative? Get blacklisted.<p>Indulge a porn habit more than the HR manager likes? Get blacklisted.<p>I cannot stress enough how dangerous this is.
There is an obvious upward trend in terms like 'big data', 'predictive analytics' and 'data mining'. I have worked in this area since 1998, so here are a few thoughts:<p>- Good analytics (I'll combine the three terms into 'analytics' for the sake of simplicity) requires an understanding of the tools, as well as a significant understanding of statistics so that you know which analysis to pick. But in addition, it requires a lot of creativity (see my examples below) and a significant amount of time to analyze/slice/dice data in a zillion different ways.<p>- This is a huge opportunity. Much, much bigger than people realize, and much bigger than past waves of new technology like client-server in the early nineties or web apps 4-5 years back. Why? Because it has the power to affect business processes very powerfully.<p>- Example 1: I spent 10 months working for a $5B shipping company analyzing data from their Marketing department. I combined it with several hundred global data sources and worked through over 100 hypotheses. At the end of it, I identified three specific actions that their existing customers take about 6 months before going to a competitor. The Marketing department was thrilled. They spent $17 million coming up with a plan to tackle this. It has been a few months since then, and they have not lost a single customer. This is a powerful proprietary competitive weapon for them now.<p>- Example 2: I analyzed 10 years of power meter reading data for a large utility company. I combined it with publicly available data on the power consumption of major appliances and census data on family composition/wealth for various neighborhoods. I was able to reliably predict the lifestyle of every family, down to whether the person living in the house streamed a movie on Friday evenings, and a whole lot more. So the company decided to use this analysis to tailor their direct mailers with very specific, personalized offerings. Their response rate on the first test mailer sent to 10,000 people? Twenty-seven percent!!! They predict that a significant portion of their profits will come from DMs.
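The commenter doesn't say what tooling was involved, but the churn analysis in Example 1 boils down to a familiar pattern: encode candidate "early warning" actions as features and fit a classifier against observed churn. A minimal sketch in Python, with all column names and data invented for illustration:

    # Hypothetical sketch of the Example 1 churn analysis: do specific customer
    # actions observed months before departure predict churn? All names and
    # data here are invented; the real analysis ran over hundreds of sources.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000

    # Three candidate "early warning" actions, one of them pure noise.
    df = pd.DataFrame({
        "reduced_shipment_volume": rng.integers(0, 2, n),
        "requested_competitor_rate_quote": rng.integers(0, 2, n),
        "support_tickets_last_quarter": rng.poisson(1.0, n),
    })
    # In this toy world, churn depends (noisily) on the first two actions.
    logit = (-2 + 1.5 * df["reduced_shipment_volume"]
                + 1.2 * df["requested_competitor_rate_quote"])
    df["churned"] = rng.random(n) < 1 / (1 + np.exp(-logit))

    X, y = df.drop(columns="churned"), df["churned"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression().fit(X_train, y_train)
    print("AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
    # Coefficients hint at which actions carry the churn signal.
    print(dict(zip(X.columns, model.coef_[0].round(2))))

The hard part, going by the comment, isn't the model fit; it's the 100-hypothesis slog of constructing features like these from messy source data.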
Honestly, I think a good tech journalist should be able to critically review what his interview partner claims rather than take it at face value, which the author of this article seems to do most of the time. Today, all start-ups that do any kind of data processing advertise themselves as "big data" companies using "advanced machine learning", but from my own experience most of them rely on pretty trivial algorithms behind the scenes.<p>Also, some of the numbers in the article really make you scratch your head: achieving more than 95% accuracy when ranking a large number of student teams in an eight-month-long business plan competition, based solely on the results of a simple online questionnaire taken at the beginning of the competition? This just seems too good to be true given the data sources they have at hand, even assuming they use the most advanced machine learning in the world.<p>Of course, if you test your algorithm many times at different competitions, you will achieve a perfect or near-perfect prediction accuracy for some of them by pure chance, which doesn't mean you can achieve this kind of accuracy consistently (which is where the business value lies).
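The multiple-comparisons point is easy to demonstrate. A quick Monte Carlo sketch (team counts and competition counts are my assumptions) shows a predictor with zero real skill posting a headline-worthy accuracy in its best competition:

    # A no-skill "predictor" evaluated across many competitions will look
    # impressive in at least one of them by chance alone. Sizes are assumed.
    import numpy as np

    rng = np.random.default_rng(42)
    n_teams = 20          # teams judged per competition
    n_competitions = 50   # separate evaluations of the same "algorithm"

    accuracies = []
    for _ in range(n_competitions):
        truth = rng.integers(0, 2, n_teams)   # which teams actually succeeded
        guess = rng.integers(0, 2, n_teams)   # coin-flip "predictions"
        accuracies.append((truth == guess).mean())

    print(f"mean accuracy: {np.mean(accuracies):.0%}")  # ~50%, the true skill
    print(f"best accuracy: {np.max(accuracies):.0%}")   # the one in the press release

Quoting only the best run is exactly the kind of thing a critical journalist should ask about.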
A few months ago, I met the founder of a company in the recruiting business. They aggregate online profiles of people, both applicants and people happily employed at their current jobs. Then, based on the combination of their LinkedIn, StackOverflow, Facebook, FourSquare etc., their algorithm ranks folks according to the desired characteristics for a given position.<p>It sounds interesting in theory. That is, until I got around to asking how they quantify the softer qualities that employers look for, like an applicant's social skills or potential for a client-facing role. Apparently, to determine this, they look at the number of "check-ins" people make at locations outside their home city while employed. Their algorithm assumes that the person is travelling for business and is therefore trusted to meet customers.<p>There are so many assumptions in this one example that it makes me question the integrity of the whole system. An algorithm is only as good as the person designing it. Maybe Evolv really is better than these guys at finding quantitative markers for softer skills, but I remain skeptical.
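For what it's worth, the heuristic as described reduces to very little code, which is part of what makes it worrying. A hypothetical sketch (all names mine):

    # The check-in heuristic, as described: count check-ins outside a person's
    # home city and treat them as business travel, hence as evidence of being
    # trusted in client-facing roles. Every name here is hypothetical.
    from dataclasses import dataclass

    @dataclass
    class CheckIn:
        city: str

    def client_facing_score(home_city: str, checkins: list[CheckIn]) -> int:
        """Naive proxy: number of away-from-home check-ins."""
        return sum(1 for c in checkins if c.city != home_city)

    # The buried assumptions: every away check-in is business travel, every
    # business trip implies client trust, and nobody ever goes on holiday.
    trips = [CheckIn("Austin"), CheckIn("Chicago"), CheckIn("Chicago"), CheckIn("Maui")]
    print(client_facing_score("Austin", trips))  # 3 -- the Maui holiday counts too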
As someone in the job market right now, I find it refreshing that the article admits how much chance plays a role in whether you land a job.<p>I'm currently working on my CV, and there's about as much conflicting advice on what the 'ideal CV' is as there is on what the ideal diet is. It's somewhat frustrating, but at the same time it shows how much chance and variation is involved in the whole process.<p>At least in the industry I'm going for (games / Unity3D, for what it's worth), actions & side-projects seem to speak louder than words.
Today's Dilbert is forecasting where this new approach to staff efficiency is heading.<p><a href="http://www.dilbert.com/dyn/str_strip/000000000/00000000/0000000/200000/10000/6000/900/216903/216903.strip.sunday.gif" rel="nofollow">http://www.dilbert.com/dyn/str_strip/000000000/00000000/0000...</a>
While I like the idea, I don't care for the side effects. If everyone uses a big-data-based algorithm and you are an outlier, you will never get a job.
This science is probably in its infancy, and as data grows and algorithms are refined, it will go places we can scarcely imagine at the moment.<p>I can hardly wait for the day when a baby is born and "garbageman" or "engineer" is stamped on its head, and it becomes futile to argue with the empirical truth about "best fit".<p>Actually, kidding aside, I think companies and employees may both benefit from this research if it is applied properly. But if a better form of these tools were available to our current system, it would likely produce some very bad effects. I think we have some important decisions coming up soon about humans and their role in society. Because technology never seems to go back in the bottle.
I love the reverse idea here - take an assessment test and then release the data to the public, presumably as anonymised as possible.<p>Then folks can take the same test and see if they should run screaming from the interview.
I stopped at "collecting the data of all our private moves on the internet and applying their logarithms accordingly." Apparently programming is all about writing logarithms.
"Past performance is not a reliable indicator of future results" this expression should be used more often when data has a big human element attached to it.
Another way for big companies to more accurately exploit their workforce under market-based conditions. If this software takes hold, the only possible way to get a raise is if the program believes that not getting a raise would actually cause you to quit. And there's only one reliable way to make the program think that: make sure you actually would quit if you don't get it.<p>In other words: expect a massive increase in job hopping as people discover that the only possible way to negotiate with this program is to quit.<p>> By morning, he says: "If a customer has thousands of people in similar job types, our system can predict accurately on a given day which individuals are most likely to quit." In response, Evolv then offers employers "what-if types of analysis" by which if they change certain incentives – a bonus, training scheme, change in environment – they can see exactly what effect it is likely to have on a particular person's behaviour. In this way Evolv advertises average reduced employee attrition rates among its clients, who include one fifth of Fortune 100 companies, of up to 15%.<p>This sounds horrible. It would force employees into quit-to-improve-working-conditions dynamics: interview constantly, at a non-ridiculous rate, and if you get offered better conditions, either Evolv will offer you the same at your current position or you should quit.<p>Of course, that's already mostly true. My advice working for a Fortune 100 company that isn't Google or Facebook: prepare to quit after 1 year or less. Regardless of whether you want to stay or not, have a serious discussion with your boss about quitting after 6 months at most.<p>I wonder if that would defeat the negotiation tactic used by "Evolv" here. If you can call it a tactic, that is.
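Mechanically, the "what-if" analysis quoted above is probably something like counterfactual scoring: fit an attrition model, then flip an incentive feature for one employee and rescore. A hedged sketch, with model, features, and data all assumed (Evolv's actual system is not public):

    # Sketch of a "what-if" attrition analysis: score quit probability, flip
    # an incentive feature, rescore. Everything here is an assumption.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 2000

    # Toy features: [months_since_raise, got_bonus, commute_minutes]
    X = np.column_stack([
        rng.integers(0, 36, n),
        rng.integers(0, 2, n),
        rng.integers(5, 90, n),
    ])
    # In this toy world, quitting rises with stale raises, falls with bonuses.
    logit = -3 + 0.08 * X[:, 0] - 1.0 * X[:, 1] + 0.01 * X[:, 2]
    y = rng.random(n) < 1 / (1 + np.exp(-logit))

    model = LogisticRegression().fit(X, y)

    employee = np.array([[24, 0, 40]])   # 24 months since a raise, no bonus
    with_bonus = employee.copy()
    with_bonus[0, 1] = 1                 # the "what-if": grant a bonus

    p_now = model.predict_proba(employee)[0, 1]
    p_then = model.predict_proba(with_bonus)[0, 1]
    print(f"predicted quit probability: {p_now:.0%} -> {p_then:.0%} with a bonus")

Which is also why the quit-threat strategy above would work: a purely correlational model only moves when your observable features start looking like those of people who actually left.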
<i>Evolv data undermines certain truisms, among them the idea of the serial job-hopper. "The number of jobs you have previously had," Simkoff says, "and even whether you're employed or not at the time of application, has zero statistical correlation with how successful you will be or how long you will stay."</i><p>The job-hopper stigma isn't about imputed low skill or merit. It's about social status. The person who is presently unemployed has (temporarily) low social status. The person with 5 jobs in 6 years, it is perceived, failed to achieve high social status at any of them.<p>The problem with humans is that most don't make decisions based on value-add potential, but on social status. They see Harvard on a resume and want to hire that person, to be socially "closer" to Harvard. It's not about whether Harvard graduates are better hires or not; that question is irrelevant.<p>Job-hopping might seem like it could be a high-status behavior, in that the best people get bored quickly and always have other opportunities, so they don't put up with abuse. After all, the serially fired job hoppers are maybe 1/10 of that set. It's not so, because the people who make hiring and promotion decisions are in corporate in-crowds, and part of being an in-crowd is the necessary assumption that <i>everyone</i> wants to be in an in-crowd. The job hopper may be individually excellent, and it may be that he'd be a 5+ year employee if given high-quality work and colleagues, but all his paper says is that he never stayed long enough to join a corporate in-crowd, and that even if he was invited into one, he made the "wrong" decision to leave it.