AI Dangers of Large Language Models
I recently listened to an eye-opening podcast episode, “The AI Dilemma,” by Aza Raskin and Tristan Harris, aired on March 24th, 2023, by the Center for Humane Technology. The conversation revolved around the current state of AI, highlighting the urgency of addressing safety concerns and updating our institutions for a post-AI world. Today, I, Fede Nolasco, will share my thoughts and insights from the podcast and shed light on the potential risks and benefits of large language models like GPT-4.
The debate aimed to give voice to AI safety experts who might not otherwise have a platform to speak out. The core question was: what will it take to get AI right? We may have only one chance to do it correctly, so we need to act quickly. Even AI researchers are worried: half of those surveyed believe there is a 10% or greater chance that humanity goes extinct from our inability to control AI.

In this article, we’ll explore the undesirable features of a dystopian society shaped by AI.
Reality Collapse
Advanced AI systems could manipulate or distort our perception of reality by creating convincing deepfakes or altering digital information, leading to a breakdown in trust and undermining our ability to distinguish between fact and fiction.
A deepfake video of a world leader declaring war on another country could cause panic, leading to real-world consequences before the video is debunked as fake.
In 2018, a deepfake video of former US President Barack Obama, created by comedian Jordan Peele, went viral as a demonstration of the potential dangers of manipulated videos.
More recently, Decrypt published a fake image of President Joe Biden generated with Midjourney v5.1. Find the related article with the image: AI Deepfakes Just Got Better With This Upgrade – Decrypt
Fake Everything
The advancement of AI technology makes it increasingly easy to create convincing fake content, including videos, images, and audio. This raises concerns about the spread of disinformation and about whether the authenticity of digital media can be trusted at all.
AI-generated articles that spread false information about a disease outbreak could cause unnecessary fear and confusion, impacting public health responses.
In 2019, Facebook removed over 3 billion fake accounts in a six-month period, highlighting the scale of fake content creation and dissemination.
Jessica Cecil, founder of the Trusted News Initiative, writes about how ChatGPT has opened a new front in the fake news wars: search engines with the latest ‘generative AI’ obscure the sources of their responses, and the result is a breeding ground for disinformation.
Trust Collapse
As AI systems become more prevalent and powerful, there is a risk of misuse or manipulation, eroding trust in institutions and society as a whole.
If AI-generated fake news becomes widespread, people may lose trust in legitimate news sources, making it difficult to distinguish reliable information from fabrication.
According to the 2021 Edelman Trust Barometer, 57% of respondents believe that journalists and reporters are purposely trying to mislead people by saying things they know are false or gross exaggerations.
Collapse of Law and Contracts
In 2016, the DAO, a decentralized autonomous organization built on the Ethereum blockchain, was hacked through a vulnerability in its smart contracts, resulting in the loss of over $50 million worth of ether.
The use of AI systems to automate decision-making or create contracts could lead to legal disputes and breakdowns in the legal system.
AI-generated smart contracts may contain hidden clauses that unfairly favor one party, leading to legal disputes and a loss of trust in the contract system.
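To make the failure mode concrete, here is a toy Python sketch of the reentrancy bug class behind the DAO hack. The real DAO was a Solidity contract; this model only reproduces the control flow, and every name in it (VulnerableVault, Attacker, and so on) is hypothetical. The flaw is that the vault pays out before updating its ledger, so a malicious payment callback can re-enter withdraw while its balance still looks unspent.

```python
# Toy model of the reentrancy flaw behind the 2016 DAO hack.
# The real contract was written in Solidity; this Python sketch only
# mimics the control flow, and all names here are hypothetical.

class VulnerableVault:
    def __init__(self):
        self.balances = {}   # depositor -> credited amount
        self.pool = 0        # total funds actually held

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.pool += amount

    def withdraw(self, who, on_payment):
        amount = self.balances.get(who, 0)
        if amount > 0 and self.pool >= amount:
            self.pool -= amount
            on_payment(amount)        # external call happens first...
            self.balances[who] = 0    # ...ledger is updated too late

class Attacker:
    def __init__(self, vault):
        self.vault = vault
        self.loot = 0

    def on_payment(self, amount):
        self.loot += amount
        if self.vault.pool >= amount:  # re-enter while the balance is stale
            self.vault.withdraw("attacker", self.on_payment)

vault = VulnerableVault()
vault.deposit("honest_user", 100)
vault.deposit("attacker", 10)

attacker = Attacker(vault)
vault.withdraw("attacker", attacker.on_payment)
print(attacker.loot)  # 110: the attacker also drained the honest user's funds
```

The fix is the same in the sketch as in Solidity: update the ledger before making the external call, the so-called checks-effects-interactions pattern.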
Automated Fake Religions
AI systems could be used to create automated religions that manipulate followers and spread disinformation.
While there are no specific examples of AI-generated fake religions, the potential for AI to manipulate people’s beliefs can be seen in the increasing use of AI-generated content to influence public opinion on various topics.
Exponential Blackmail
In 2020, the FBI’s Internet Crime Complaint Center reported over 28,000 complaints related to extortion, with victims losing over $54 million.
AI systems could gather vast amounts of personal data, which could then be used to blackmail individuals or groups.
AI systems could identify and exploit sensitive information from someone’s social media profiles, using it to blackmail them into performing illegal activities or paying large sums of money.
ZDNET explores this threat in “These are the 20 most dangerous crimes that artificial intelligence will create.”
Automated Cyberweapons
In 2010, the Stuxnet worm, which targeted Iranian nuclear facilities, was discovered. It is widely attributed to a joint cyber operation by the US and Israel.
Advanced AI systems could create and deploy cyber weapons, which could disrupt critical infrastructure or launch cyberattacks.
AI-driven malware could autonomously target critical infrastructure, like power grids or transportation systems, causing widespread disruption and damage.
Automated Exploitation of Code
In 2017, the WannaCry ransomware attack exploited a vulnerability in Microsoft Windows, affecting over 200,000 computers across 150 countries and causing billions of dollars in damages.
AI systems could identify vulnerabilities in software and systems, which could then be exploited for malicious purposes. For example, an AI system could find a flaw in a popular software application, enabling hackers to exploit it and steal sensitive user data.
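At its simplest, automated vulnerability discovery is an evolution of fuzzing: generating inputs mechanically and watching for crashes. Below is a deliberately minimal, non-AI sketch of that idea; parse_record is a hypothetical buggy parser invented for this example, and a real AI-assisted system would learn which inputs are promising rather than sampling at random.

```python
import random
import string

def parse_record(data: str) -> str:
    # Hypothetical buggy parser: assumes a "key=value" shape and
    # crashes when the value part is empty, e.g. on input "name=".
    key, value = data.split("=", 1)
    return value[0]  # IndexError when value is ""

def fuzz(target, trials=10_000):
    """Feed random strings to `target` and collect inputs that crash it."""
    crashes = []
    alphabet = string.ascii_lowercase + "="
    for _ in range(trials):
        candidate = "".join(random.choices(alphabet, k=random.randint(1, 8)))
        try:
            target(candidate)
        except ValueError:
            pass  # "no '=' present" is an expected, handled rejection
        except Exception as exc:
            crashes.append((candidate, type(exc).__name__))
    return crashes

found = fuzz(parse_record)
print(f"{len(found)} crashing inputs, e.g. {found[:3]}")
```

Even this blind search finds the crash within a few thousand trials; the podcast's concern is what happens when the same loop is steered by a model that actually understands the code it is attacking.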
Automated Lobbying
AI systems could influence lawmakers and policymakers, potentially undermining the democratic process.
AI-driven bots could flood lawmakers’ social media and email accounts with messages supporting a particular policy or corporate interest, skewing public opinion and undermining democratic decision-making.
According to Schneier on Security, ChatGPT has the potential to disrupt everyday communications like emails and college essays, but it also poses a significant threat to democratic processes, particularly lobbying. AI-powered chatbots could automatically compose comments for regulatory processes, write letters to the editor, or post millions of comments on news articles, blogs, and social media daily. They could also identify key legislators and influencers in the policymaking process, exploiting weak points through direct communication and public relations campaigns.
While AI could provide speed and flexibility, making lobbying more accessible to a wider audience, it may also be used by powerful institutions to gain more influence. Governments will have to adapt how they interact with lobbyists in the face of AI-driven lobbying techniques. AI lobbying may enhance the power of already influential institutions, although it remains unclear how this will shape the future of lobbying.
Biology Automation
In 2021, scientists at DeepMind used their AlphaFold AI system to predict the 3D structures of proteins, a breakthrough in biology with potential implications for the development of new drugs and medical treatments.
The use of AI systems to automate biological processes cuts both ways: the same capabilities that accelerate new medical treatments could also be turned toward engineering new diseases.
AI-driven gene editing technologies could inadvertently create a new, highly infectious disease with devastating global consequences.
Exponential Scams
In 2021, the Federal Trade Commission reported that consumers lost over $5.8 billion to fraud, and scams of this kind become ever easier to scale with AI-generated content and automation.
As AI systems become more sophisticated, there is a risk they may be used to perpetrate large-scale scams or frauds.
AI-generated phishing emails could become more sophisticated and personalized, increasing the likelihood of victims falling for scams and losing money or personal information.
More recently, the Guardian reported that Darktrace has warned of a rise in AI-enhanced scams since the release of ChatGPT.
A-Z Testing of Everything
In 2020, a study published in Nature found that an AI system was able to diagnose breast cancer from mammograms with a higher degree of accuracy than human radiologists.
The use of AI systems to automate testing could have significant benefits, but also raises concerns about unintended consequences and the need for oversight.
Overreliance on AI-driven testing in drug development could lead to the approval of medications with harmful side effects, putting public health at risk.
More recently, according to MIT Technology Review, AI is dreaming up drugs that no one has ever seen; now we’ve got to see if they work.
Synthetic Relationships
In 2020, an AI-powered chatbot named Replika gained widespread attention for its ability to engage users in deep, emotional conversations, leading to concerns about the potential impact on human relationships.
The use of AI systems to create synthetic relationships, such as virtual assistants or chatbots, raises concerns about emotional manipulation and the impact on human relationships.
People may become overly attached to AI-powered virtual companions, leading to social isolation and a decline in meaningful human relationships.
More recently, TIME reported that AI-human romances are flourishing, and this is just the beginning.
AlphaPersuade
In 2018, researchers at the University of Warwick found that personalized, AI-generated political advertisements had a significant impact on voter behavior, potentially leading to manipulation and exploitation.
The name, coined in the podcast by analogy with AlphaGo, refers to an AI trained to become superhumanly persuasive. Advanced AI systems used to create highly persuasive marketing or advertising campaigns could manipulate or exploit individuals.
AI-generated advertisements could become so persuasive that they encourage unhealthy behaviors or spending habits, exploiting consumers’ vulnerabilities for profit.
As we continue to explore the world of AI, it’s essential to remain vigilant and address the potential dangers of large language models. Let’s work together to harness the power of AI for good while mitigating its risks.
If you have any thoughts or ideas about AI safety and regulation, feel free to contact me, Fede Nolasco, or share your suggestions for new articles on my blog, DataTunnel.
Resources
- AI Safety
- Voices: AI isn’t falling into the wrong hands – it’s being built by them
- AI Is Coming for Filmmaking: Here’s How – The Hollywood Reporter
- AI-generated deepfakes are moving faster than policy can : NPR
- ChatGPT has opened a new front in the fake news wars | Chatham House – International Affairs Think Tank
- AI Chatbots Have Been Used to Create Dozens of News Content Farms – Bloomberg
- ChatGPT is making up fake Guardian articles. Here’s how we’re responding | Chris Moran | The Guardian
- ‘Aims’: the software for hire that can control 30,000 fake online profiles | Technology | The Guardian
- Bing’s AI Is Threatening Users. That’s No Laughing Matter | Time
- AI and Political Lobbying
- Big Tech lobbying on AI regulation as industry races to harness ChatGPT popularity
- Risks and remedies for artificial intelligence in health care
- Is there a place for biologists in the age of automation?
- Darktrace warns of rise in AI-enhanced scams since ChatGPT release
- AI is dreaming up drugs that no one has ever seen. Now we’ve got to see if they work
- AI-Human Romances Are Flourishing—And This Is Just the Beginning
- Replika in the Metaverse: the moral problem with empathy in ‘It from Bit’
- AI with a Human Face
- Synthetic Humans: Digital Twins Living and Breathing Online