The timing of the recent AI for Science Forum, held by Google DeepMind and the Royal Society in London, coincided perfectly with the awarding of the Nobel prizes in the sciences. The excitement surrounding AI’s advancements was palpable, particularly after researchers at Google DeepMind shared the Nobel prize in chemistry, a day after AI pioneers were honoured in the physics category.
Demis Hassabis, CEO of Google DeepMind, emphasized AI’s potential to usher in a new renaissance of scientific discovery. However, he cautioned that realizing this potential requires well-chosen problems, quality data, and sophisticated algorithms. Despite the optimism, serious concerns surround AI’s deployment.
Hassabis highlighted various risks, including the potential for AI to exacerbate social inequalities, cause financial crises, or trigger catastrophic data breaches. Siddhartha Mukherjee, a prominent cancer researcher, echoed these concerns, predicting an “AI Fukushima”: a catastrophic failure comparable to the nuclear disaster triggered by the 2011 tsunami in Japan. Such forewarnings illustrate the delicate balance needed as society navigates AI’s capabilities.
Despite these risks, there are many promising applications of AI being explored around the globe. For example, AI-assisted ultrasound scans are being trialed in Nairobi, significantly reducing the training time for nurses. In London, Materiom employs AI to develop entirely bio-based materials, avoiding fossil fuels. Furthermore, advancements in medical imaging and climate modeling are transforming their respective fields.
AlphaFold, developed by Hassabis and his colleague John Jumper, has revolutionized drug design by predicting protein structures and their interactions. Ongoing enhancements could shorten the long process of creating new treatments from years to mere months, heralding a new era in biomedicine.
The complexity of AI decision-making, often referred to as the black box problem, presents a substantial obstacle for researchers. However, Hassabis optimistically stated that new advancements may help clarify these processes within the next five years, increasing trust in AI applications.
Concerns about AI’s energy consumption amid the climate crisis pose additional challenges. Training a large AI model, such as the one behind OpenAI’s ChatGPT, can consume over 10 gigawatt-hours of energy, enough to power 1,000 homes for a year. While Hassabis argues that the total benefits will outweigh these energy costs, skeptics such as Asmeret Asefaw Berhe emphasize the dire need for sustainable practices in AI development.
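The homes-for-a-year comparison can be sanity-checked with simple arithmetic. A minimal sketch, assuming an average household uses roughly 10 MWh of electricity per year (a figure close to typical US consumption, not one stated at the forum):

```python
# Sanity check: how many homes could the energy used to train one
# large AI model power for a year?
training_energy_gwh = 10       # reported training energy, in gigawatt-hours
home_use_mwh_per_year = 10     # ASSUMED average annual household use, in MWh

# Convert GWh to MWh (1 GWh = 1,000 MWh), then divide by per-home use.
homes_for_one_year = (training_energy_gwh * 1000) / home_use_mwh_per_year
print(homes_for_one_year)  # 1000.0
```

Under that assumption the arithmetic matches the article’s figure of about 1,000 homes; a lower per-home consumption (as in many European countries) would make the equivalent number of homes correspondingly larger.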
AI developers are under increasing pressure to ensure their innovations do not exacerbate the climate crisis. Discussions at the forum revealed a common hope that increasing energy demand will spur investments in renewable energy sources. The pathway to realizing the benefits of AI while maintaining sustainability will require collaborative efforts and transformative change.