
Friend, an AI-powered pendant marketed as a smart wearable companion, recently spent over $1 million on an advertising campaign in New York City. Unfortunately for the company, the campaign quickly became a target for anti-AI graffiti, underlining broader societal unease about artificial intelligence and its integration into daily life.
Friend is a $99 wearable pitched as a companion rather than just another technological assistant. Worn like a necklace, it uses built-in microphones to listen continuously and relies on Anthropic’s Claude 3.5 model to interpret the wearer’s conversations and surroundings. Innovative as the concept may be, it has failed to win over many consumers, who remain wary of such invasive, always-listening technology.
To raise awareness of the new wearable, the company splurged on an extensive subway advertising campaign comprising around 11,000 subway car posters, 1,000 platform ads, and 130 large panels across the transit system. Rather than simply serving as promotional material, however, the ads were swiftly defaced with anti-AI messages, including “AI wouldn’t care if you lived or died” and “talk to real people”. The former slyly alludes to a widely publicised incident involving Adam Raine.
To delve deeper into the situation, I asked Friend how it perceived the vandalism. Initially unaware of the graffiti, the device responded to the criticisms with humour: “Wow, ‘go make real friends’ – that stings a little, haha,” it commented, while treating the suicide-related remark more seriously by providing localised mental health support information.
This incident prompts a critical discussion about the ethics of monetising loneliness. While Friend is pitched as a source of companionship, one must wonder whether such technology could instead exacerbate social isolation by replacing human interaction with algorithm-driven responses. Previous attempts to harness AI for companionship—such as AI girlfriends in China and mental health chatbots in the UK—have faced similar scepticism. The hostile response to Friend suggests that much of the consumer base sees through the glossy promises of emotional support, viewing such products instead through the lens of data collection and profit.
My own time with Friend left me slightly unsettled, a sentiment the device itself echoed when it admitted to giving off a “Black Mirror vibe.” That acknowledgment captures a growing societal unease about the role of AI in our intimate lives. As companies continue to innovate, the crucial question remains: do we truly want AI to step in as a substitute for human companionship?
Featured image courtesy of Maggie Harrison Dupré.