Note: This article originally appeared on Partnership on AI’s website. They’ve granted us permission to repost.
By Lale Tekisalp, Consultant, Partnership on AI
As the AI community continues to improve the development, deployment, and governance of AI systems, we are seeing that most of the social sector remains unable to take advantage of this technology. An intentional effort must be made to ensure that the benefits of artificial intelligence are equitably distributed.
The last decade has seen the proliferation of AI for Social Good programs within companies and academia. PAI conducted an analysis to understand the current landscape of AI for Social Good and the areas that need improvement. At the PAI All Partners Meeting in September 2019, we organized a working session with our partner DataKind that brought together individuals from computer science, ethics, human rights, and other fields to discuss how AI for Social Good projects can be scaled successfully.
When working on AI for Social Good projects, it is essential that we keep in mind the structural and governance challenges that could prevent these innovative solutions from being beneficial. It is one thing to identify the problem and solution, and another thing to understand what it takes to implement that solution successfully. To achieve successful implementation and scale, we need to focus on identifying and enabling the conditions that need to exist. As we transition into the next phase of AI for Social Good, PAI recommends focusing on the following areas to realize the benefits of the hype and innovation that we have been seeing in this space:
- Stronger coordination: A critical mass of organizations is now working in the AI for Social Good space. However, we see a lack of coordination among these players, which leads to one-off projects, redundant work, and funding clashes. Google’s AI Impact Challenge last year revealed many groups working on similar use cases. A more holistic approach that brings together key organizations around common issue areas can help with scaling and enable more efficient allocation of resources. As Jake Porway, founder and executive director at DataKind, commented, “the moment we’re talking about a social challenge, we’re outside the confines of a specific organization and the issue becomes massively interconnected.” Stronger coordination among key stakeholders is crucial to achieving transformational change.
- Capacity building: One of the reasons AI for Social Good projects are not easily sustainable is the lack of technical capacity and resources in social sector organizations. In our interview with Leila Toplic, lead for the Emerging Technologies Initiative at NetHope, she explained that “AI/ML are new for our sector and most nonprofits don’t yet have the capacity and expertise to evaluate, develop, procure and use AI-enabled solutions.” For these projects to be sustainable, more resources need to be pooled to close the knowledge gap on AI and data science in the social sector. Capacity building is also needed on the technology side, so that technology experts can better understand the social sector.
- Cross-sector engagement: The gap between technologists and social sector organizations is complicating the development and deployment of AI for Social Good projects. In order to bridge this gap, stronger cross-sector collaboration is needed. Kush Varshney who co-leads IBM’s Science for Social Good program explained how the language barrier makes it difficult to collaborate: “The biggest challenge is understanding the problem and scoping the project.” Longer term engineer-in-residence programs that connect engineers with nonprofits can help both parties gain a deeper understanding of each other.
- More data collaborations: The most pressing challenge voiced by organizations working on AI for Social Good projects is the lack of data. It’s either that data is not easily accessible, or that it is simply not available. In our interview with her, Shabnam Mojtahedi, senior program manager at Benetech explained: “Getting the pre-labeled data is a big challenge. People don’t know where to look for datasets.” To solve this challenge, it’s important to identify the missing datasets that are critical for social good use cases and pool resources into developing those datasets. Furthermore, the ecosystem needs to agree on data collaboration standards so that the available data can be used by different organizations that might need it.
- Increased scrutiny of potential risks: AI systems carry even greater risk in “social good” contexts, where they serve those in greatest need. These risks include severe bias, privacy violations, and security and safety failures. While leveraging this technology for social good, we need to apply greater scrutiny to the potential harm it may cause. Similarly, AI ethics and safety work must include more examples from AI for Social Good contexts. Existing AI guidelines cannot simply be transposed to AI for Good contexts, because the inherent risk of unintended consequences is higher.
- Stronger focus on usability and last mile implementation: In order to make sure projects get implemented successfully, we need a stronger focus on usability that applies human-centered design principles. When we take an AI tool that is developed in a Western context and try to implement it in the developing world, we often face problems. Peter Haas, co-director of the Humanity Centered Robotics Initiative at Brown University, reminded us that “we need to keep in mind the ICT4D ethos and not require huge downloads in places where there are challenges with connectivity, for example”.
One important reason these projects have not been implemented successfully is that the AI conversations in the social sector are happening at the headquarters level. Leila Toplic of NetHope explained how they’re trying to overcome this problem: “At NetHope, we’re making sure that capacity building efforts and resources are reaching and engaging program officers in the field (e.g. Africa, Middle East) who know what problems need to be solved and are critical to responsible, sustainable implementation of the solution.” We need to make sure that the conversations include the right people.
As AI technologies continue to be leveraged for social good, we need to go beyond the hype. In addition to AI innovation, we need to put more effort and funding into solving some of the fundamental structural challenges outlined above. As Mark Latonero, research lead for human rights at Data & Society, says in his recent article, “the deeper issue is that no massive social problem can be reduced to the solution offered by the smartest corporate technologists partnering with the most venerable international organizations.” To avoid falling into tech-solutionism, we need to adopt and promote a problem-first approach, and recognize that AI is only one part of the toolkit that social sector organizations can use to help solve societal challenges.
Lale Tekisalp is an MBA student at UC Berkeley’s Haas School of Business and has a background in enterprise technology in emerging markets. She spent five years at Microsoft, leading product marketing and marketing communications for the company’s cloud computing platform in the Middle East and Africa. At Haas, she’s focusing on Social Impact and Responsible Business in the technology industry. Lale is currently a consultant for the Partnership on AI, supporting their work around AI for Social Good.