As Artificial Intelligence (AI) continues to permeate various aspects of society, understanding its applications and limitations becomes increasingly crucial. This was the focus of a recent event at Saint Michael’s College, where first-year students who had read AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, co-authored by Dr. Arvind Narayanan of Princeton University, had the opportunity to engage with the expert directly.

Addressing Misconceptions of AI

Professor Kristin Dykstra, director of the first-year seminar program, opened the event by highlighting the significance of Narayanan’s work in clarifying the potential and pitfalls of emerging technologies. Both the book and the accompanying talk aimed to demystify AI, a technology often misunderstood by the public. Narayanan emphasized that raising awareness of AI’s capacities and limitations is necessary to avoid the dangers that misunderstanding can bring.

The Risks of Predictive AI

In his presentation, Narayanan notably steered clear of popular AI applications like ChatGPT or self-driving vehicles, instead spotlighting a pressing issue: the use of AI in hiring practices. He recounted his investigation into AI-driven interview software that assigns numerical personality scores based on candidates’ 30-second video responses. Journalists testing the technology found that superficial changes produced drastic variations in those scores, reinforcing Narayanan’s assertion that such systems are inherently flawed and often rely on biased data.

Limitations in Critical Uses of AI

Drawing attention to more serious implications, he cited a ProPublica investigation revealing that predictive AI used in criminal justice systems exhibited significant racial bias and other flaws. Narayanan lamented society’s acceptance of potentially life-altering AI judgments that lack the accuracy to predict human behavior, stressing the ethical ramifications of deploying such technologies in high-stakes decision-making.

A Cautious Optimism for Generative AI

On a more optimistic note, Narayanan discussed generative AI and its potential for innovation, such as creating engaging learning games for children. However, he warned that generative technologies can also produce negative outcomes, including the marketing of misleading content, underscoring the ongoing need for users to approach AI-generated information with discernment.

Ethics and Education in AI Usage

Central to Narayanan’s message was the assertion that AI itself is morally neutral; the ethical implications stem from how it is applied. He compared relying on AI shortcuts in education to using a forklift to move weights at the gym: the load gets lifted, but no strength is built. The essence of learning, Narayanan argued, should not be sacrificed for convenience.

The Future of AI in the Workforce

Concluding the discussion, Narayanan expressed optimism about AI’s role in transforming the workforce. He suggested that while certain jobs may become automated, new opportunities will also arise, altering the nature of work rather than eliminating it. Drawing a parallel to the introduction of ATMs, which changed bank tellers’ roles rather than abolishing them, he predicted that evolving job descriptions would allow humans to focus on tasks that cannot be automated.