Published in the San Diego Union-Tribune, January 6, 2020
In January, every columnist worth his digital fingers likes to offer predictions for the coming year, but I’m going to resist falling into that trap.
Instead, I want to share a fascinating paper from two professors — Barbara Mellers, of the Wharton School at the University of Pennsylvania, and Ville Satopaa, of INSEAD, a graduate business school in Europe. They have studied the process of predicting outcomes and have developed “Bias, Information, Noise: The BIN Model of Forecasting.”
The goal of this kind of thinking is to improve your decision-making. After all, predictions are interesting, but they are only really valuable if they are accurate. Mellers says, “In the real world we are not rational consumers of information,” since we are afflicted with bias and noise. “Bias is a systematic error,” meaning that if you have enough examples, know enough about the behavior and see enough patterns, you can account for or discount that systematic error and adjust your response.
Noise, however, is a different cat. It is random; it comes and goes, which makes it very difficult to rely firmly on a prediction. For example, you have a head cold that day, and it affects your judgment. Malcolm Gladwell, in his book “Talking to Strangers,” tells of a Chicago judge who made his decisions about bail “by looking into the eyes of the defendant.” Factor in whether decisions are made before lunch or after, and you have some major noise and unpredictability.
Mellers and Satopaa argue that the way to eliminate noise is to take the human out of the loop. Hello, algorithm. But to train the algorithm, you need to use machine learning, and that kind of learning comes from crunching thousands of examples to look for a pattern.
The puzzle is that when you are making decisions with little prior art or history to draw on, the initial algorithm might not be much better than a coin flip. It cannot account for a “black swan” type of event.
But the two professors do offer some hope. They have learned that noise can be greatly reduced when you work in teams, not alone. In some ways this is obvious — a good CEO listens to his team — but at the same time, some of the tough calls are made alone or in contradiction to the consensus.
The reason I am fascinated by this study is that whether you are the entrepreneur or the investor, in the final analysis, you have to “pick” — whether to invest, what kind of product to build, what market to go after. You have to make a decision, and that decision is always made without complete information, infected with bias and noise. An algorithm might provide insight, but the “bold move” from a leader is probably only a choice on the fringe, since it is not a common occurrence.
My nature is inclined to push all the chips into the center of the table and play for the ace on the river, and I know that those decisions are filled with bias and noise, but in that instant, they may turn out to be exactly right. They call it the Hail Mary (Roger Staubach) for a reason.
The challenge for our CEO is that good decision-making is not based on a single opinion (eyewitnesses at a crime scene are notoriously inaccurate and unreliable); rather, it should be developed from a team of experts seeking a consensus.
But groupthink is not the best way to innovate, while it might well be the best way to execute the plan. You can see the puzzle. Forecasting is not the same as creating. Averaging may be useful for grading on the curve, but in the final analysis, you have to throw the ball to someone, and you can’t rely on the wisdom of the crowd.
I love machine learning. My little software company relies on it, but when you don’t know what you don’t know, and there is no history to rely on, then I want to bet on the guy whose “picking record” is the best, complete with bias and noise.
If you want to be a weather forecaster, the BIN will work out great. If you want to change the world and risk failing, then let it rain.
Rule No. 641
Sunny and overcast — at the same time.