By Jake Porway, Founder & Executive Director, DataKind
Welcome, DataKind friends, to the most sci-fi of all years, 2020!
Last year, we saw AI continue its slide into ill repute. It’s hard to separate bad AI behavior from negative views of the big tech companies and billionaires working in AI, but it seems clear that, in our circles at least, there’s a growing skepticism of the field generally.
However, at DataKind, we exist to see data science and AI used positively in the service of humanity. We’re skeptical optimists: wary of the pitfalls of technology, but actively working to realize its potential. It’s that time of year when a lot of folks conjecture about what will happen with AI in 2020, but we prefer to focus on what we’d like to see.
With that, here’s our 2020 wishlist for AI for good:
- Fewer pilots, more programs: The phrase “pilotitis” has long circulated in the international development space to draw attention to the disproportionate effort given to starting and piloting programs compared to scaling and maintaining them. We’re soon going to hit pilotitis in the data and AI for good space, if we haven’t already. Last year, we talked about how big tech’s reckoning meant that more companies would jump in and commit resources to using their tech and their talent for social good. While we’ve indeed seen more entrants in the space this last year, we’re not seeing investment in building those efforts into bigger, maintainable solutions. There are a few efforts underway to change this dynamic, such as the AI for Good Global Summit out of the UN, which seeks to find cross-cutting opportunities that could live beyond the pilot stage. At DataKind, we’re attempting to shift this trend in our own work by focusing on Impact Practices, areas in which we’ll deploy tech talent to uncover more sustainable opportunities. We hope that this year funders start supporting more long-term data and AI initiatives, so that we can move to a world where sustained efforts blossom out of these pilots.
- Multistakeholder partnerships become the centers of change: One of the reasons we’re not inundated with AI and digital technology that supports human prosperity is that there are few (we would argue almost zero) organizations incentivized to build these technologies to the standards we deserve. What single organization has the technological capacity to build innovative solutions, will prioritize effectiveness over profitability, will build transparent, do-no-harm solutions, and will give communities oversight over them? We can’t think of any (though please do tell us if you can!). Therefore, it’s imperative that multistakeholder partnerships come together to provide the necessary components of tech talent, problem expertise, local involvement, and funding that aren’t otherwise working in concert. Some of our favorite examples of this are the UNDP Accelerator Labs, the Partnership on AI, and StriveTogether. We’re privileged to be working with The Rockefeller Foundation and Mastercard Center for Inclusive Growth through their data.org platform, which seeks to bring other multistakeholder partnerships together to create social impact with data science.
- AI and ethics get divorced: One of the biggest conversations these days is about ethical AI, and for good reason. The onslaught of technology that unwittingly (or wittingly) surveils us, collects our data, and makes automated decisions in opaque ways threatens to undermine our freedom and our commitment to a fair and just society. However, though more advanced algorithms have precipitated this conversation, they’re a red herring in solving it. Though it may feel natural to point to the AI software or the way it was designed as the issue, this line of reasoning suggests that we could achieve “ethical AI” simply by coding it differently, using less biased datasets, or training coders in ethics. While those steps are necessary, what’s missing in this reasoning is that AI, for all of its hype, doesn’t make ethical or unethical decisions. AI is merely an accelerant: it achieves the goals of the system it’s in faster and cheaper than before. It’s not something with free will. Instead, it’s simply an efficiency machine, ultimately no different from a car or electricity or any other technology. With that in mind, the real ethics of AI come from the institutions and values it optimizes, not from the tech itself. If you build an AI for a company to increase profits, then it’ll optimize for profits, cheaper and faster. If you build an AI to deliver court verdicts in a racist society, then it’ll optimize for court verdicts that could be racist, cheaper and faster. If we want to see ethical AI, then we must embed it in ethical institutions and systems. Sadly, there aren’t a lot of candidates for that.
At DataKind, we believe in putting data and AI in the hands of NGOs and human-rights-friendly governments because they’re the most values-aligned institutions we know of. Per our wish above, certain multistakeholder partnerships could also meet these criteria of ethical, values-aligned work. No matter what the ultimate home for these AI tools is, what’s clear is that we must focus less on making the technology ethical, and more on ensuring the technology is overseen by ethical people. I would personally rather see a biased AI system put into the world with a way for a community to modify it or turn it off in pursuit of equitable outcomes than tinker endlessly on an “unbiased AI” that goes out into the world with no mechanism for change. This year, let’s make ethical AI focus on the ethics of our systems, not the AI itself.
It’s a wild time for AI, social change, and the world in general right now. We believe that AI has a dual role to play in creating a more prosperous world. First, if we let AI simply be built by the powerful who have access to it, then it’ll only serve their values, not ours. Today, that largely means the goals of companies, which are beholden to their shareholders, not to communities who need healthcare and water. Second, beyond regulating that risk, AI could serve a positive vision of the future by empowering mission-driven organizations to do their work more effectively. We’re bullish on that shining vision of the world, even as storm clouds loom overhead, but we’ll only get there if we start making the changes above. More fervently now than in years past, we wish these things for 2020.