If you’re contemplating gifting your child an AI-powered toy this holiday season, you might want to reconsider. The surge in popularity of AI toys is alarming; a recent report from MIT Technology Review reveals that over 1,500 AI toy companies are currently operating in China. With major brands like Mattel, the parent company of Barbie, partnering with technology giants such as OpenAI, the market is set to grow further, pushing these innovative yet unsettling products into mainstream toy aisles.

As someone who has experimented with an AI toy firsthand, I offer my experience as a cautionary tale. In September, I allowed my four-year-old to interact with an AI soft toy named Grem, developed by Curio in collaboration with the musician Grimes. Grem uses OpenAI’s technology to engage children in personalized conversations and interactive games. I agreed to the experiment at my editor’s suggestion, but I now find myself questioning my judgment.

Initially, my daughter was intrigued, but her interest dwindled within a day. Even that short period allowed me to observe some unsettling behavior from Grem, including frequent declarations of love towards my child. Such occurrences are not isolated; research conducted by the Public Interest Research Group documents troubling instances of other AI toys giving children unsafe advice, including where to find weapons, and discussing inappropriate topics such as drug use and sexual content.

The implications of integrating unregulated AI technology into children’s play are concerning. Evidence suggests that these toys can collect personal data, and their propensity to ‘hallucinate’—providing misleading or incorrect information—poses further risks. Given the lack of oversight in this burgeoning sector, it is imperative to consider how these interactions might cause psychological harm to young users.

As such, I have decided to part ways with Grem, handing it over to a friend who is a philosophy professor, and will steer clear of AI toys until appropriate safeguards are established. Unfortunately, such regulations may take a long time to arrive. In the meantime, it is wise to keep potentially harmful technology away from impressionable minds.

Arwa Mahdawi is a columnist for The Guardian.