Ray Dalio Has an Unbelievable Algorithm
Humans create algorithms, so algorithms can perpetuate human biases. As obvious as this statement might seem, some pretty smart people still don't get it.
The latest example arrived last week, in a Wall Street Journal article on a relationship at Bridgewater, the world's biggest hedge fund, involving senior executive Greg Jensen and an unnamed female subordinate. The Journal, citing "people familiar with the matter," reported that Bridgewater founder Ray Dalio, in parsing statements from the two, placed more weight on Jensen's version of events because "Jensen’s overall believability had long been ranked particularly highly in Bridgewater’s rating metrics." The woman eventually received a settlement of more than $1 million and was compelled to leave the firm. Jensen remains at Bridgewater.
In handling the incident, Dalio relied on an algorithm of sorts -- Bridgewater's famed “radical transparency,” in which most meetings are recorded and employees are ranked based on their ratings of one another. He's urging other companies to follow his example, and recently expanded his set of "Principles" into a 600-page bestselling book. He has even suggested that he wants his artificial intelligence system to replace him someday.
I witnessed firsthand Dalio's confidence in his approach last March, when I was in New York practicing a TED talk. My message was that we should stop putting so much faith in allegedly objective algorithms. Dalio, who was in the audience, suggested that I’d gotten it all wrong -- that algorithms are good when they reflect the minds of their creators, and bad when they don’t. I demurred: What if the creators are wrong? What if the data scientists are going on incomplete information, or biased data? That happens all the time, as he doubtless knows.
Unfounded trust in algorithms can be seen in college rankings, teacher assessments and, more recently, lethal autonomous weapons. It's most common among people who don't know math and are intimidated by anything technical, complicated and sophisticated. Which makes Dalio's situation surprising, because he understands this stuff.
I saw the ease with which seemingly pure data could be biased when I worked at the hedge fund D.E. Shaw, where I created algorithms designed to make money in futures markets. In my group, we would periodically rate each other’s ideas along various metrics, such as their chances of working, how many countries they might work in, how long they might work and how much money they might make. The idea was to come up with an objective “expected profit” for each idea, based solely on its merits. If the score was high enough, it would be assigned to a quant to develop.
Yet certain people's ideas inevitably carried added weight. Some proposals came from junior quants, while others came from the likes of Larry Summers, the former Treasury Secretary who worked at the fund in the 2000s. The ideas of men in positions of power naturally elicited higher marks: Maybe people gave them the benefit of the doubt because of the language they used, their reputation or even their body language. In any case, implicit bias that favored the alpha male was a real thing.
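The mechanism is easy to see in miniature. Here is a toy sketch (every name and number is hypothetical, not D.E. Shaw's actual model) of how a believability-style weight on the proposer is enough to separate two identical ideas:

```python
# Toy model of an "expected profit" score: combine metric ratings
# (each on a 0-1 scale) into one number, then scale by a
# believability weight attached to the person who proposed the idea.
# All metrics and weights here are invented for illustration.

def expected_profit(ratings, believability=1.0):
    """Multiply the metric ratings together, then apply the
    proposer's believability weight."""
    base = (ratings["chance_of_working"]
            * ratings["breadth"]      # how many markets it might work in
            * ratings["longevity"]    # how long it might keep working
            * ratings["payoff"])      # how much it might make
    return base * believability

# The exact same idea, rated identically on the merits.
idea = {"chance_of_working": 0.6, "breadth": 0.8,
        "longevity": 0.7, "payoff": 0.9}

# Only the proposer differs: skepticism for the junior quant,
# benefit of the doubt for the senior name.
junior_score = expected_profit(idea, believability=0.9)
senior_score = expected_profit(idea, believability=1.3)

print(round(junior_score, 3))  # 0.272
print(round(senior_score, 3))  # 0.393
```

Nothing about the idea changed; only the prior about its author did. A supposedly objective score can carry that prior straight through to the final number.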
I’d suggest the same thing is happening in the Bridgewater case. It wasn’t just anyone being judged: It was Dalio’s right-hand man, the executive being groomed to eventually replace him. Nobody should expect Jensen's “believability” score to be objective, at least when compared to a junior woman. And nobody should consider it relevant outside the business environment in which it was generated. Yet -- at least according to the Journal -- it was used to assess his probity in a deeply personal matter.
From what I've seen, I wouldn’t place too much value on the system that Dalio says has made him very rich. Just because it's based on data doesn’t mean it can’t be an echo chamber.
To contact the editor responsible for this story:
Mark Whitehouse at firstname.lastname@example.org