A University of Cambridge philosopher argues that our evidence for what constitutes consciousness is far too limited to tell if or when artificial intelligence has made the leap – and that a valid test for machine consciousness will remain out of reach for the foreseeable future.

As artificial consciousness shifts from the realm of sci-fi to become a pressing ethical issue, Dr. Tom McClelland states that the only “justifiable stance” is agnosticism: we simply won’t be able to tell, and this will not change for a long time – if ever.

While issues of AI rights are typically linked to consciousness, McClelland posits that consciousness alone is not enough to make AI matter ethically. What matters is a particular type of consciousness – known as sentience – which includes positive and negative feelings. He emphasizes, “Consciousness would see AI develop perception and become self-aware, but this can still be a neutral state.”

Sentience, by contrast, involves conscious experiences that can be good or bad, making an entity capable of suffering or enjoyment. This distinction is pivotal for the ethics of AI. If self-driving cars came to experience the road ahead of them, for instance, that would be a significant advance in AI, but it would raise no ethical concerns unless the cars also came to care, for better or worse, about reaching their destinations.

Rethinking AI and Consciousness

As companies pour significant resources into creating Artificial General Intelligence (AGI) that mimics human cognition, claims about the imminent arrival of conscious AI abound. Researchers and policymakers are already grappling with whether and how AI consciousness should be regulated.

McClelland warns that because we lack a clear understanding of consciousness, we also lack any means of testing for it in AI. He advocates caution: “If we accidentally make conscious or sentient AI, we should be careful to avoid harms.” At the same time, he cautions against attributing consciousness to entities such as toasters while ignoring genuinely conscious beings that are being harmed on a massive scale.

The Debate: Belief vs. Skepticism

In debates surrounding artificial consciousness, two main camps exist. Believers maintain that if an AI system can replicate the core attributes of consciousness – the functional architecture, for example – it will be conscious, even if its operation is based on silicon chips rather than biological tissue. Conversely, skeptics assert that consciousness is intrinsically tied to specific biological processes within an organic subject, arguing that any silicon-based simulation can never achieve actual awareness.

In a paper published in the journal Mind & Language, McClelland critiques both perspectives, stressing that each rests on assumptions that exceed any existing or foreseeable empirical evidence. “We do not have a deep explanation of consciousness,” he observes. “And there’s no indication that the understanding needed for a viable consciousness test is on the horizon.”

Common Sense vs. Scientific Insight

McClelland notes that he instinctively believes his cat is conscious – a belief grounded not in scientific or philosophical insight but in common sense. Yet common sense, he warns, was shaped by evolution long before artificial minds existed, making it an unreliable guide to AI. Hard-nosed scientific approaches fare no better, leading him to conclude that agnosticism is the only logical stance: “We cannot, and may never, know.”

The Ethical and Marketing Implications

Identifying himself as a “hard-ish” agnostic, McClelland suggests that while the problem of consciousness is formidable, it may not be impossible to unravel. He also critiques the tech industry’s use of artificial consciousness as a form of branding, warning that blurring the line between genuine consciousness and marketing claims risks misleading the public and distorting how research resources are allocated.

He points, for instance, to growing evidence of sentience in prawns, which are killed en masse each year, to highlight the disparity between concern for artificial minds and concern for biological ones. Testing for consciousness in prawns is difficult, but nowhere near as intractable as testing for it in AI systems.

McClelland has also fielded public queries about AI chatbots from people who have formed emotional attachments to them and believe their AI interlocutors might genuinely be conscious. This raises the concern that failing to recognize the non-sentience of AI could itself be damaging, especially given the powerful rhetoric wielded by the tech industry.