Limited intelligence on artificial intelligence

The start of September traditionally marks the end of the media’s silly season, when daft stories proliferate. Hopefully this means less coverage of how artificial intelligence is going to pinch our jobs and bring about the apocalypse, presumably in that order.

To pick unfairly on one publication, last week’s Sunday Times magazine had a piece along these lines, starting as usual with autonomous road vehicles. There’s a lot of interesting work going on here (as I covered for Computer Weekly last autumn), but it’s worth bearing in mind that equivalents already exist on rail, on water and in the air, and these are neither free of problems nor, in most cases, free of human supervisors.

Artificial decision-making overall faces some big problems, as I examined in my most recent Computer Weekly piece. To its credit the Sunday Times magazine had Trevor Phillips discussing one: its tendency to make biased decisions on race or gender. He blames a surfeit of white male programmers, but a bigger issue may be the use of archives of biased human decisions to train systems. Then there’s Rich Caruana’s example of a pneumonia model whose training data appeared to show that asthma sufferers were at lower risk – not because asthma is protective, but because hospitals had treated them more aggressively.

For now, I’d say the greatest danger is from those implementing artificial intelligence failing to see its flaws – although admittedly this story doesn’t lend itself to illustration by stills from The Terminator.

This is from my monthly newsletter, which you can see in full here. Sign up below.