I recently listened to an excellent podcast episode, “The AI Dilemma,” by Aza Raskin and Tristan Harris, released on March 24, 2023, by the Center for Humane Technology. The discussion revolved around the current state of AI, the urgency of addressing safety concerns, and the need to update our institutions for a post-AI world. It was an eye-opening conversation about the potential risks and benefits of large language models like GPT-4.
The episode aimed to give voice to AI safety experts who might not otherwise have a platform to speak out. The core question was: what will it take to get AI right? As I learned, we may have only one chance to do it correctly, so we need to act quickly. Even AI researchers are worried: in a survey cited in the episode, roughly half of them estimated at least a 10% chance that humanity could go extinct from our inability to control AI.

Both Aza and Tristan acknowledged AI’s potential benefits, such as helping combat climate change, but emphasized the importance of heading off dystopian outcomes. They drew a parallel with Robert Oppenheimer and the Manhattan Project, calling for the responsible and secure deployment of transformative technologies like AI.
As the podcast continued, they explored the concept of “takeoff,” the point at which AI becomes smarter than humans in various domains and begins to improve itself. While not their primary concern, it underscores the need for caution in AI development. They also described a significant shift that began around 2017, when previously separate subfields of machine learning started to converge: researchers found they could treat nearly everything, including images and sound, as language, which paved the way for today’s large language models.
These large language models, which the hosts refer to as “Gollums,” display seemingly emergent capabilities that were never intentionally built into them. It’s crucial to weigh the potential consequences of such powerful technologies and to ensure they are developed and deployed responsibly, avoiding negative impacts on society.
The podcast also discussed the implications of democratizing AI. While greater access to AI technology may benefit society in many ways, uncontrolled democratization can lead to unintended consequences and potential harm. It’s essential to strike a balance between harnessing AI’s potential for good and mitigating its risks.
As AI evolves, it’s crucial to be forward-thinking and to anticipate challenges. Society must adapt to these technological advancements and put proper safety measures in place to mitigate potential risks. Yuval Harari’s comparison of AI to nuclear weapons underscores the scale of AI’s potential impact on society, and the pace of change creates a cognitive blind spot that makes it difficult even for experts to anticipate how AI will develop and what its consequences will be.
The podcast ended on a hopeful note, emphasizing our collective agency in shaping AI’s future. By involving experts, stakeholders, and decision-makers in these discussions, we can build a coordinated response to the challenges posed by AI. As AI advances at an exponential pace, it’s essential that we come together to find solutions and navigate this new era.
The Center for Humane Technology, through initiatives like the Your Undivided Attention podcast, aims to raise awareness, engage experts, and facilitate these important discussions. By participating in these conversations and staying informed about AI’s challenges and opportunities, each of us can play a role in shaping a future where technology serves the greater good.
If you’re interested in learning more or have questions or concerns, reach out to the Center for Humane Technology. Together, we can work towards a safer, more responsible, and humane future with AI.