In late 2020, at the height of the COVID-19 pandemic, a pioneering initiative aimed at combating poverty with artificial intelligence (AI) took shape in Togo. The effort, known as Novissi, meaning ‘solidarity’ in Éwé, enabled tens of thousands of impoverished villagers in the narrow strip of land in West Africa to receive approximately US$10 every two weeks directly into their mobile money accounts. The amounts were modest, but they played a critical role in helping families avoid hunger.
Traditionally, poverty-alleviation projects rely on extensive in-person surveys to collect data, an approach rendered impractical during the pandemic. Instead, the project, led by Cina Lawson, Togo’s Minister of Digital Economy and Transformation, along with researchers from the University of California, Berkeley, and the NGO GiveDirectly, turned to AI. Using satellite imagery and mobile phone network data, the team could quickly estimate the wealth of regions and of individual phone subscribers. Lawson emphasized the need for a ‘surgical approach’ to identifying those in dire need, marking a significant advance in the application of AI to anti-poverty strategies.
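In broad strokes, this kind of targeting works by training a model on a small surveyed sample and then predicting wealth for everyone with a phone. The sketch below is a minimal, hypothetical illustration in Python; the features, model and eligibility cutoff are assumptions made for the example, not Novissi’s actual pipeline.

```python
# Hypothetical sketch: rank mobile-money subscribers by predicted wealth
# from phone-usage features, then target cash transfers at the poorest.
# Feature names and the 10% cutoff are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy data standing in for per-subscriber phone features:
# monthly call minutes, number of contacts, average mobile-money top-up.
n = 1_000
X = rng.normal(size=(n, 3))
# Survey-measured consumption for a labelled subsample (synthetic here).
y = 2.0 + 0.8 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n)

model = Ridge(alpha=1.0)
print("Cross-validated R^2:", cross_val_score(model, X, y, cv=5).mean())

# Fit on the labelled sample, predict for everyone, flag the poorest decile.
model.fit(X, y)
predicted_consumption = model.predict(X)
cutoff = np.quantile(predicted_consumption, 0.10)
eligible = predicted_consumption <= cutoff
print(f"{eligible.sum()} subscribers flagged for transfers")
```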
According to the World Bank, nearly 700 million people worldwide live on less than $2.15 a day, the threshold for extreme poverty. Ending such poverty is the first of the United Nations’ Sustainable Development Goals, which underscores the urgent need to measure poverty effectively and to understand what people in need actually require. Traditional methods are often slow and expensive, prompting the exploration of AI as a potential solution.
AI’s ability to work around outdated or patchy data sources is a large part of its appeal. Joshua Blumenstock, a computer scientist at UC Berkeley, noted the interest and debate surrounding this application of the technology. AI can process data quickly and cover a broader population than conventional household surveys, identifying patterns that might elude even seasoned analysts. AI tools could also improve the evaluation of anti-poverty programs, allowing organizations to measure effectiveness in areas such as health, education and infrastructure, a point echoed in World Bank reports advocating the use of machine learning to address data shortages.
Despite the promise of AI in poverty alleviation, researchers such as Ola Hall from Lund University caution about its inherent risks. AI models often embody biases that could disadvantage people without digital footprints, rendering them potentially ineffective for the most marginalized. Hall’s concern is that, although these technologies offer innovative solutions, their accuracy in determining eligibility for aid could fall short of what is required.
Yet, as Ariel BenYishay, a development economist, points out, the prevailing methods of assessing poverty are equally flawed. He argues that existing systems often rely on outdated and inaccurate data, implying that even an imperfect AI model could be an improvement.
Measuring poverty is not a new endeavor. Early attempts began with the British social reformer Charles Booth, who from 1886 to 1903 meticulously gathered data on income levels and social classes across London, translating his findings into a color-coded map. Similarly, Seebohm Rowntree’s 1901 study in York used interviews to determine poverty levels based on basic nutritional requirements.
During the ‘War on Poverty’ declared by President Lyndon Johnson in 1964, the Office of Economic Opportunity adopted the poverty threshold developed by Mollie Orshansky, defined as the minimum income needed to cover food, shelter and other basic costs. Despite regional differences in economic conditions, most definitions have continued to rely on income-based, dollars-per-day metrics.
Economist Sabina Alkire advocates a multidimensional view of poverty. In the 2000s, together with the economist James Foster, she championed the Multidimensional Poverty Index (MPI), which captures overlapping deprivations through multiple indicators, such as nutrition and school attendance. The MPI represents a significant shift away from single monetary measures towards a more comprehensive understanding of poverty’s impact.
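At its core, the MPI combines two quantities: the share of people who are multidimensionally poor (H) and the average intensity of their deprivation (A), so that MPI = H × A. The short sketch below works through that calculation on toy data; the indicators, weights and one-third cutoff are illustrative choices, not the official global MPI specification.

```python
# Sketch of the Alkire-Foster counting approach behind the MPI:
# MPI = H x A, where H is the share of households that are multidimensionally
# poor and A is their average deprivation intensity.
# Indicators, weights and the 1/3 cutoff are illustrative only.
import numpy as np

# Rows = households, columns = indicators
# (e.g. nutrition, school attendance, electricity); 1 = deprived.
deprivations = np.array([
    [1, 1, 0],
    [0, 0, 1],
    [1, 1, 1],
    [0, 0, 0],
])
weights = np.array([0.4, 0.4, 0.2])  # must sum to 1
cutoff = 1 / 3                       # score needed to count as poor

scores = deprivations @ weights      # weighted deprivation score per household
poor = scores >= cutoff

H = poor.mean()                                   # headcount ratio
A = scores[poor].mean() if poor.any() else 0.0    # intensity among the poor
print(f"H = {H:.2f}, A = {A:.2f}, MPI = {H * A:.2f}")
```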
Many definitions of poverty have been proposed, each reflecting different motives and different available data. Research by Jennifer Davis and others indicates considerable differences in household rankings across regions, highlighting the lack of consensus over how best to identify those most in need. The problem is compounded by the time-consuming nature of traditional household surveys, which are often criticized for their limited scope and reliance on outdated assumptions.
Marshall Burke, an economist, underscores the transformative potential of AI in poverty research. Having grappled with the challenges of traditional data collection during fieldwork in East Africa, Burke and his colleagues at Stanford University began exploring how emerging technologies could redefine poverty assessment. Collaborating with experts in remote sensing and AI, they focused on leveraging the wealth of information contained in satellite images.
By training algorithms on features associated with wealth, such as night-time light intensity, Burke’s team was able to predict local poverty levels with impressive accuracy. Their 2016 findings revealed a strong correlation between AI analyses of satellite imagery and conventional, survey-based poverty measures, at a fraction of the effort and cost of acquiring data on the ground.
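A much-simplified sketch of the underlying idea follows: regress survey-measured wealth on imagery-derived features. The real 2016 work used deep-learning features extracted from daytime satellite images, with night-time lights as an intermediate training signal; the toy linear model and synthetic data below are meant only to illustrate the correlation step, not to reproduce that pipeline.

```python
# Simplified illustration: regress a survey-based asset index on
# satellite-derived features such as night-light intensity.
# All data here are synthetic; feature names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic village-level features: mean night-light intensity and a
# stand-in for another imagery-derived measure (e.g. built-up area).
n_villages = 500
night_lights = rng.gamma(shape=2.0, scale=1.5, size=n_villages)
built_up = rng.normal(size=n_villages)
X = np.column_stack([night_lights, built_up])

# Synthetic "ground truth" asset index from household surveys.
asset_index = (0.6 * night_lights + 0.3 * built_up
               + rng.normal(scale=0.8, size=n_villages))

X_train, X_test, y_train, y_test = train_test_split(
    X, asset_index, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("Held-out R^2:", round(model.score(X_test, y_test), 2))
```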
As interest in applying AI to this sector expands, so too does the experimentation within the academic community. While there is a need for caution, responsibly incorporated AI has the potential to unlock new pathways in identifying and addressing poverty on a global scale, particularly when traditional methods fall short. Such technological approaches may not only deepen our understanding of poverty but also prompt a reevaluation of how we measure and respond to it.