The title of my talk was inspired by the news that last December Mark Zuckerberg converted a 1950s toaster into a smart toaster, which he can now start by talking to Siri on his phone. The news struck me as very smart and also incredibly dumb. Is this what the internet of things has brought us to?
This paper seemed to hit a few buttons at the recent 2017 RANZ Conference held in Auckland, so I thought I’d post it up. I can’t see a way to attach files here on the blog, but drop me a line and I can send the PPT to you. My email: Duncan Stuart
My PPTs usually don’t have notes, but for this one I’ve added the commentary, which on the day I more or less improvised. The net result won a People’s Choice award, which was very encouraging; there’s a pile of younger researchers snapping at my heels.
The central argument I wanted to get across is that we’re not simply suffering from data overload, we’re exposed to attack – not only from malicious hackers (think Equifax), but also from ordinary errors and, this is the challenge, smart algorithms that may get things 95% right but get things wrong the other 5% of the time. There will be victims – and these may be small incidents (a lost-luggage moment) but also bigger and more systemic dramas. Algorithms and models are, after all, assumptions wrapped up in maths. As we move into a world of smart AI analytics there are going to be whole blocs of people, already disadvantaged, who will be increasingly disadvantaged as smart algorithms become self-fulfilling. One example is the notorious US-based data-driven Stop & Frisk policing policy that targets young black males in crime hotspots. Keep picking on just this group and pretty soon they will dominate the crime statistics – mostly through sampling bias.
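The sampling-bias mechanism is easy to demonstrate. Here is a minimal simulation sketch (all numbers are hypothetical, and the group labels are just placeholders): two groups with the *same* underlying offence rate, but a stop policy that targets one group four times as heavily. The recorded crime statistics end up mirroring the stop allocation, not any real difference in behaviour.

```python
import random

random.seed(1)

# Hypothetical setup: both groups have an IDENTICAL true offence rate.
TRUE_RATE = 0.05  # chance that any single stop records an offence

# Biased policy: 80% of 10,000 stops target one group (as in the
# Stop & Frisk example above). These figures are illustrative only.
stops = {"targeted_group": 8000, "other_group": 2000}
arrests = {"targeted_group": 0, "other_group": 0}

for group, n in stops.items():
    for _ in range(n):
        if random.random() < TRUE_RATE:
            arrests[group] += 1

total = sum(arrests.values())
for group, count in arrests.items():
    print(f"{group}: {count} arrests ({count / total:.0%} of recorded crime)")
```

Even though both groups offend at exactly the same rate, the targeted group ends up with roughly four times the recorded arrests – about 80% of the “crime statistics”. Feed those statistics back into next year’s patrol allocation and the loop becomes self-fulfilling.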
If AI and big-data systems are programmed to be mostly right, what forms of redress will there be for the people who are wronged by those same algorithms? How much error is acceptable? We need to be thinking about that.