I think there is an important lesson in the approach to science that we see in the social sciences. Coming from a technical background, we treat science, and data, as the fundamental way of discovering truth, but with humans there is often more than one truth.

We've seen the effect of this in management over the past 25 years. Today a good manager is expected to approach a team not by instructing them in what to do and when to do it, but by creating shared meaning through group conversation. It matters most when you manage people who produce by thinking and being creative, but even on the factory line this softer approach is proving useful.

We haven't yet applied this to big data. I'm often sold ML as the ability to predict the future, and to some extent that is true. If I look at all the alcoholic families in my municipality and compare their case history with big data gathered at the national level, I'll certainly be able to predict how many of their children we'll need to remove. I just can't predict which ones, because determinism doesn't actually work on something that complex.

The more data we have, the less we understand about causality, something I've learned from history. If you look at the Roman Empire without digging into it, choosing Christianity seems obvious, but once you really get all the data on their options and try to figure out why they did what they did, you'll have no clue. Another example is online advertising: I read a newspaper that I've never seen a single ad for, yet I see plenty of ads for newspapers. I'm often called by newspaper salesmen as well, but never for the one I actually read, because it doesn't suit my elaborate online profile. My profile tells the ad agencies what I should read, but it doesn't tell them why, and that difference is what fails them.

If we really want ML and big data to be truly useful, I think we need to learn from the social sciences, because they work much more with the complicated science behind the why.
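
A minimal sketch of the "how many, but not which ones" point above, assuming hypothetical, well-calibrated risk scores; the numbers and the flagging rule are made up purely for illustration:

```python
import random

random.seed(42)

N = 10_000  # hypothetical number of at-risk families in a municipality

# Hypothetical calibrated risk scores: everyone's risk is moderate,
# no individual is anywhere near certainty.
risks = [random.uniform(0.05, 0.35) for _ in range(N)]

# Simulate what actually happens, one outcome per family.
outcomes = [1 if random.random() < p else 0 for p in risks]

# Aggregate prediction: expected number of cases vs. the realised number.
expected_total = sum(risks)
actual_total = sum(outcomes)
print(f"predicted cases: {expected_total:.0f}, actual cases: {actual_total}")
# The totals land within a few percent of each other.

# Individual prediction: flag the highest-risk families and check the hit rate.
flagged = sorted(range(N), key=lambda i: risks[i], reverse=True)[:actual_total]
hits = sum(outcomes[i] for i in flagged)
print(f"flagged {len(flagged)} families, of which {hits} actually had the outcome")
# Even flagging only the top of the risk distribution, most flagged families
# do not have the outcome, because no single risk score is close to 1.
```

The aggregate count is accurate because individual errors average out over thousands of families; the per-family prediction stays weak because the model only knows base rates, not the causal story of any particular household.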