The Vatican released a significant document titled Antiqua et Nova on January 28, 2025. This ‘Note on the relationship between artificial intelligence and human intelligence’ is the culmination of reflections from the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education. The document aims to guide not only those responsible for transmitting the Catholic faith but also all stakeholders who advocate for the alignment of technological advancements with the common good.
Composed of 117 paragraphs, the document outlines both the remarkable opportunities and the severe risks presented by AI across various sectors such as education, the economy, labor, healthcare, and international relations, including warfare. In particular, it warns that AI could escalate military capabilities beyond human control, potentially igniting an arms race detrimental to human rights and global stability.
The text elaborates on the distinction between AI and human intelligence, referencing Pope Francis’s perspective that associating AI directly with intelligence can be misleading. It reinforces the point that AI should be regarded as a product of human intellect rather than an artificial counterpart to it. The document notes that while AI holds the potential for innovative developments, it also has the capacity to exacerbate issues such as discrimination and social inequality.
Addressing the realm of warfare, the document expresses grave ethical concerns regarding autonomous weapon systems that operate without direct human oversight. It echoes Pope Francis’s calls for a prohibition on such technologies, emphasizing their existential risks and destructive potential against civilians.
On the subject of human interaction, the document warns that reliance on AI might result in harmful isolation. It also highlights the ethical implications of personifying AI, a practice that may hinder children’s development and mislead people in their human relationships.
In the economic sphere, while AI could enhance productivity, the document cautions that it might also depersonalize work, subject individuals to automated oversight, and confine them to monotonous tasks. This could destabilize the workforce if workers are not equipped to adapt to the evolving landscape.
In healthcare, the Vatican acknowledges AI’s potential to revolutionize patient care but warns against its capacity to disrupt the critical doctor-patient relationship, which could increase feelings of loneliness among patients. Moreover, it points out the risk of AI exacerbating existing disparities in healthcare access, potentially creating a model that favors the wealthy.
While AI offers remarkable possibilities for enhancing educational services and providing instant feedback, the document warns that AI-generated outputs may stifle students’ critical thinking skills and expose them to biased information. Misuse of AI for misinformation or deceit in education is cited as a moral concern.
The document also stresses the dangers AI poses in spreading misinformation and fabricated content, warning that vigilance is essential to preserve the integrity of information-sharing. It draws attention to the shared responsibility of everyone involved in generating and circulating AI-produced content.
On privacy, the document notes that AI might compromise individual privacy and even conscience, and it calls for ethical oversight of how AI systems use data. It also raises concerns about digital surveillance and its potential misuse to control belief systems and personal expression.
Lastly, the Vatican’s document underscores the environmental impact of AI technologies, warning that despite AI’s positive applications for environmental stewardship, its resource-intensive nature raises concerns about CO2 emissions and resource allocation.
Overall, the Note concludes with a cautionary reminder against permitting humanity to become “enslaved” by its innovations, reinforcing that AI must serve to enhance, not to replace, human decision-making and intelligence.