In this episode of the TWIML AI Podcast, host Sam Charrington interviews Nicholas Carlini, a research scientist at Google DeepMind, about his award-winning paper on adversarial machine learning and model security. They explore the concept of model stealing, specifically how an attacker can extract the last layer (the embedding projection layer) of production language models such as ChatGPT and PaLM-2. Carlini explains the implications of this research for AI security, the ethical concerns surrounding model privacy, and the challenges of real-time conversation systems. The conversation also covers recent advances in AI security research and the remediation strategies companies like OpenAI and Google have adopted to protect their models. Carlini shares his views on the future of AI security and the risks posed by increasingly powerful language models.

The TWIML AI Podcast with Sam Charrington
September 28, 2024
1:03:01