I have written about automated decision making, or machine learning, for Computer Weekly, in particular about the numerous problems with using it. The biggest set of issues is summed up nicely by Joanna Bryson of Bath and Princeton universities: “The reason machine learning is working so well is it is leveraging human culture. It’s getting the bad with the good.”
There has rightly been a lot of attention on how automated decision making can bake in, for example, a history of racist decisions made by people. But the problem is more general. Microsoft’s Rich Caruana explained to me a problem that has worried him since he first came across it in the 1990s: according to hospital data, patients with asthma were less likely to die from pneumonia, and software trained on that data concluded as much.
The data was biased, but for good reasons: patients with asthma were more likely to see a doctor quickly for new breathing problems, doctors were more likely to take these seriously and hospitals were more likely to treat them urgently. The actions of patients and professionals, based on the medical reality that pneumonia is more dangerous for those with asthma, made the data suggest the reverse was true.
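The effect is easy to reproduce. Here is a minimal sketch, with made-up numbers, of how treatment that responds to a risk factor can make that factor look protective in the recorded data. The asthma prevalence, risk levels and treatment rates below are assumptions for illustration, not figures from Caruana’s study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical simulation: asthma raises the *underlying* risk of dying
# from pneumonia, but asthmatic patients are treated far more urgently,
# which lowers their *recorded* mortality. All numbers are illustrative.
asthma = rng.random(n) < 0.1
base_risk = np.where(asthma, 0.18, 0.10)       # true risk before treatment
urgent_care = np.where(asthma, 0.9, 0.3)       # asthmatics treated urgently
treated = rng.random(n) < urgent_care
risk = base_risk * np.where(treated, 0.3, 1.0) # urgent care cuts risk by 70%
died = rng.random(n) < risk

# The raw mortality rates a model trained on this data would see:
print(f"mortality with asthma:    {died[asthma].mean():.3f}")
print(f"mortality without asthma: {died[~asthma].mean():.3f}")
# Roughly 0.067 vs 0.079: asthma *appears* protective, because the
# treatment that responds to it is missing from the recorded features.
```

Because the urgency of care never appears as a feature, any model fitted to this data will learn that asthma lowers the risk of death, exactly the reverse of the medical reality.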
This doesn’t mean automated decision making is useless. It just means that, like any technology, it is not a panacea.