
Microsoft guide to responsible AI governance

Microsoft's guide to responsible AI governance ensures alignment with its Responsible AI Standard and AI Principles.

Microsoft has worked with OpenAI to develop a customized set of capabilities and techniques that combine cutting-edge AI technology with web search in the new Bing. Microsoft has also harnessed the full power of its responsible AI ecosystem to prepare for the launch of the new Bing experience, which has been developed in line with Microsoft’s AI Principles and Responsible AI Standard, and in partnership with responsible AI experts across the company, including Microsoft’s Office of Responsible AI, engineering teams, Microsoft Research, and the Aether Committee. Additionally, Microsoft has invested in academic research programs to ensure researchers outside Microsoft can access the company’s foundation models and the Azure OpenAI Service to undertake research and validate findings.


The original publication is titled “Governing AI: A Blueprint for the Future” and was published by Microsoft on May 25, 2023. It is a comprehensive guide to responsible AI governance that spans 42 pages and is intended to provide a framework for organizations to develop and implement responsible AI practices. The guide covers a range of topics, including Microsoft’s approach to responsible AI, operationalizing responsible AI at Microsoft, applying responsible AI to specific AI applications, and empowering customers on their responsible AI journeys. The guide is intended to help organizations understand the importance of responsible AI and to provide them with the tools and resources they need to deploy AI responsibly.

The foreword discusses the importance of governing artificial intelligence in a responsible and ethical manner. Brad Smith argues that AI has the potential to transform the world for the better, but that it also presents significant risks and challenges that must be addressed. He suggests that a collaborative approach is needed, with governments, businesses, and civil society working together to develop policies and practices that ensure AI is used for the greater good. The foreword concludes by emphasizing the need for ongoing dialogue and engagement on this issue, and the importance of acting now to shape the future of AI governance.

Part 1: Governing AI: A Blueprint for the Future

Part 1 provides an overview of the current state of artificial intelligence and its potential impact on society. The authors discuss the rapid pace of AI development and the many ways in which it is already being used in fields such as healthcare, education, and transportation. They also highlight the potential benefits of AI, such as increased efficiency, improved decision-making, and new opportunities for innovation. However, the authors also acknowledge the risks and challenges associated with AI, including job displacement, bias and discrimination, and the potential for AI to be used for malicious purposes. Part 1 concludes by emphasizing the need for responsible AI governance that considers both the potential benefits and risks of this powerful technology.

Implement and build upon new government-led AI safety frameworks.

The first point of the five-point blueprint for governing AI is to implement and build upon new government-led AI safety frameworks. Microsoft suggests that one of the most effective ways to move quickly in this area is to build on recent governmental work advancing AI safety, arguing that this makes far more sense than starting from scratch when there is a recent and strong footing on which to build. Microsoft also suggests that broader international collaboration is needed to ensure that AI safety frameworks are effective and widely adopted. Overall, the goal of this point is to establish a strong foundation for AI governance that prioritizes safety and the responsible use of this powerful technology.

An example of implementing and building upon new government-led AI safety frameworks is the European Union’s General Data Protection Regulation (GDPR). The GDPR is a comprehensive data protection regulation that came into effect in May 2018 and applies to all companies that process the personal data of EU citizens. The regulation includes provisions related to the use of AI, such as the right to explanation, which requires organizations to provide individuals with an explanation of how automated decisions are made. The GDPR also includes provisions related to data protection impact assessments, which require organizations to assess the potential risks associated with the use of AI and to implement appropriate safeguards to mitigate those risks. The GDPR is an example of a government-led AI safety framework that organizations can build upon to ensure that they are deploying AI responsibly and in compliance with applicable regulations.

Require effective safety brakes for AI systems that control critical infrastructure.

The second point of the five-point blueprint for governing AI is to require effective safety brakes for AI systems that control critical infrastructure. Microsoft argues that as AI becomes more powerful, there is a growing need to ensure that it can be controlled and that unintended consequences can be avoided. They suggest that safety brakes should be built into AI systems that control critical infrastructure, such as the electrical grid, water system, and city traffic flows. Microsoft proposes new safety requirements that would create safety brakes for AI systems that control the operation of designated critical infrastructure. The goal of this point is to ensure that AI systems remain under human control and that they can be deactivated or disengaged in the event of unintended behavior.

An example of requiring effective safety brakes for AI systems that control critical infrastructure is the use of autonomous vehicles in the transportation industry. Autonomous vehicles rely on AI systems to operate, and there is a risk that these systems could malfunction and cause accidents. To mitigate this risk, the National Highway Traffic Safety Administration (NHTSA) in the United States has issued guidelines for the safe deployment of autonomous vehicles. These guidelines include requirements for safety brakes that can bring the vehicle to a safe stop in the event of a malfunction or other emergency. The guidelines also require that autonomous vehicles be equipped with a human-machine interface that allows the driver to take control of the vehicle if necessary. The NHTSA guidelines are an example of requiring effective safety brakes for AI systems that control critical infrastructure and demonstrate the importance of ensuring that AI systems are designed and deployed in a safe and responsible manner.

The third point of the five-point blueprint for governing AI, proposed by Microsoft, is to develop a broad legal and regulatory framework based on the technology architecture for AI. The authors suggest that the law should reflect the technology architecture for AI itself, and that different regulatory responsibilities should be placed upon different actors based on their role in managing different aspects of AI technology. They propose that different laws place specific regulatory responsibilities on the organizations exercising certain responsibilities at three layers of the technology stack:

  • the applications layer,
  • the model layer,
  • and the infrastructure layer.

Microsoft argues that existing legal protections should be applied to the use of AI, especially at the applications layer, where people's safety and rights will be most directly affected. The goal of this point is to ensure that AI is used ethically and responsibly, and that the legal and regulatory framework keeps pace with the rapid development of AI technology.

An example of developing a broad legal and regulatory framework based on the technology architecture for AI is the European Union’s proposed Artificial Intelligence Act. The proposed regulation aims to create a comprehensive legal framework for AI in the EU and is based on a risk-based approach that takes into account the potential harm that AI systems could cause. The regulation includes provisions related to transparency, accountability, and human oversight of AI systems, as well as requirements for data quality and bias mitigation. The proposed regulation also includes a list of high-risk AI applications, such as facial recognition and autonomous vehicles, that will be subject to additional requirements and oversight. The Artificial Intelligence Act is an example of developing a broad legal and regulatory framework based on the technology architecture for AI and demonstrates the importance of ensuring that AI is developed and deployed in a responsible and ethical manner.

Promote transparency and ensure academic and nonprofit access to AI.

Microsoft’s fourth point in the five-point blueprint for governing AI is to promote transparency and ensure academic and nonprofit access to AI. The authors argue that transparency is critical to building trust in AI systems and ensuring that they are used ethically and responsibly. Microsoft commits to releasing an annual transparency report to inform the public about its policies, systems, progress, and performance in managing AI responsibly and safely. Additionally, the authors suggest that academic researchers and the nonprofit community should have access to AI resources for research purposes. Microsoft argues that basic research, especially at universities, has been of fundamental importance to the economic and strategic success of the United States since the 1940s. They propose new steps, including steps Microsoft will take, to address these priorities. The goal is to ensure that AI is developed and used in a transparent and responsible manner, and that academic and nonprofit researchers have access to the resources they need to advance the field.

An example of promoting transparency and ensuring academic and nonprofit access to AI is OpenAI. OpenAI was founded as a nonprofit research organization that aims to promote and develop friendly AI for the benefit of humanity. The organization is committed to transparency and has made many of its research papers and AI models available to the public. OpenAI has also developed an API that allows developers to access its AI models and has made this API available to academic researchers and nonprofit organizations for free. By providing access to its AI models and research, OpenAI is promoting transparency and ensuring that academic and nonprofit organizations have access to the tools and resources they need to develop and deploy AI in a responsible and ethical manner.

Pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology.

Microsoft’s fifth point in the five-point blueprint for governing AI is to pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology. The authors suggest that there is enormous opportunity to bring the public and private sectors together to use AI as a tool to improve the world, including by countering the challenges that technological change inevitably creates. They argue that democratic societies can accomplish much when they harness the power of technology and bring the public and private sectors together. The authors propose that new public-private partnerships should be established to address the impact of AI on society. They suggest that AI can be used to protect democracy and fundamental rights, provide broad access to the AI skills that will promote inclusive growth, and advance the planet’s sustainability needs. The goal of this point is to ensure that AI is used for the greater good and that the public and private sectors work together to address the challenges and opportunities presented by this powerful technology.

An example of pursuing new public-private partnerships to use AI as an effective tool to address societal challenges is the partnership between Microsoft and the United Nations Development Programme (UNDP). In 2019, Microsoft and UNDP announced a new partnership to use AI and other technologies to address some of the world’s most pressing challenges, including poverty, inequality, and climate change. The partnership aims to leverage Microsoft’s AI expertise and UNDP’s global reach to develop innovative solutions that can help achieve the United Nations’ Sustainable Development Goals. As part of the partnership, Microsoft has committed to providing $5 million in funding and technical support to UNDP’s Accelerator Labs, which are designed to identify and scale innovative solutions to development challenges. The partnership between Microsoft and UNDP is an example of pursuing new public-private partnerships to use AI as an effective tool to address societal challenges and demonstrates the potential of AI to drive positive social impact.

Part 2: Responsible by Design: Microsoft’s Approach to Building AI Systems that Benefit Society

Part 2 provides a detailed framework for responsible AI governance. Microsoft argues that AI governance should be based on six key principles:

  • fairness,
  • reliability and safety,
  • privacy and security,
  • inclusiveness,
  • transparency,
  • and accountability.

They provide specific recommendations for how each of these principles can be incorporated into AI governance frameworks, including the need for clear standards and guidelines, independent oversight, and ongoing evaluation and improvement. The authors also emphasize the importance of collaboration between governments, industry, academia, and civil society to ensure that AI is developed and used in a responsible and ethical manner. Overall, Part 2 provides a comprehensive roadmap for AI governance that prioritizes the safety, fairness, and inclusivity of this powerful technology.

Microsoft’s commitment to developing AI responsibly.

Microsoft is committed to developing AI responsibly and has been working on advancing responsible AI for the past seven years. The company has established a responsible AI program that involves coordination from the Office of Responsible AI and essential involvement across every part of the company. This includes core responsible AI teams in engineering, research, and policy, embedded Responsible AI Champions throughout organizations, executive leadership and accountability as embodied in the Responsible AI Council, and oversight from Microsoft’s Board. Microsoft is also investing in the talent it already has and hiring new and diverse talent to grow its responsible AI ecosystem. The company believes that with the right commitments and investments, AI can be governed responsibly and used for the greater good.

An example of Microsoft’s commitment to developing AI responsibly is the company’s AI principles, which were first introduced in 2018. These principles comprise six core values: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Microsoft has committed to embedding these principles into its AI products and services and to working with customers and partners to ensure that they are also deploying AI responsibly. Microsoft has also been actively involved in developing responsible AI standards and guidelines, including the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the Partnership on AI to Benefit People and Society. These initiatives demonstrate Microsoft’s commitment to developing AI responsibly and to promoting the responsible use of AI across the industry.

Operationalizing Responsible AI at Microsoft

Microsoft outlines its approach to implementing responsible AI governance within the company. The company recognizes that responsible AI must be supported by the highest levels of leadership and championed at every level across the organization. To that end, Microsoft has developed a governance system that incorporates many diverse teams and functions across the company. Core teams within engineering, research, and policy play critical roles in advancing responsible AI, each bringing a set of unique skills. Responsible AI roles are also embedded within product, engineering, and sales teams by the appointment of “Responsible AI Champions” by leadership. Microsoft’s Office of Responsible AI continues to evolve the governance structure to enable progress and accountability as a foundational piece of the company’s responsible AI program. The goal is to ensure that AI is developed and used in a responsible and ethical manner across all aspects of the company.

Case study: Applying our Responsible AI approach to the new Bing

Microsoft describes how it applied its Responsible AI approach to the development of the new Bing search engine. Guided by Microsoft’s AI Principles and Responsible AI Standard, the company sought to identify, measure, and mitigate potential harms and misuse of the new Bing while securing the transformative and beneficial uses that the new experience provides. Microsoft conducted extensive red teaming in collaboration with OpenAI to assess how the latest technology would work without any additional safeguards applied to it. The company also implemented several touchpoints for meaningful AI disclosure, where users are notified that they are interacting with an AI system and about opportunities to learn more about the new Bing. Microsoft’s approach to identifying, measuring, and mitigating harms will continue to evolve as the company learns more and makes improvements based on feedback gathered during the preview period and beyond. The goal of this case study is to demonstrate how Microsoft’s Responsible AI approach can be applied to specific AI applications to ensure that they are developed and used in a responsible and ethical manner.

Advancing Responsible AI through company culture

Microsoft emphasizes the importance of building a culture committed to the principles and actions of responsible AI. The company recognizes that procedures and standards are a critical part of operationalizing responsible AI, but they must be complemented by a culture that prioritizes responsible AI at every level of the organization. Microsoft invests in talent focused on AI and embeds ownership of responsible AI in every role to deepen its culture of advancing responsible AI. The company also prioritizes diversity, collaboration, and the capacity to see AI systems through a sociotechnical lens. Microsoft believes that its people are the core of its culture and that every individual contributes to its mission and goals. The goal is to emphasize that responsible AI must be a part of the company’s culture and that every employee must be committed to advancing responsible AI.

An example of advancing responsible AI through company culture is Microsoft’s AI Business School. The AI Business School is an initiative launched by Microsoft in 2019 to help business leaders and executives understand the potential of AI and how to implement it in a responsible and ethical manner. The program includes a series of online courses and resources that cover topics such as AI strategy, responsible AI, and AI ethics. The AI Business School is an example of how Microsoft is investing in the talent it already has to develop skills and empower them to think broadly about the potential impact of AI systems on individuals and society. By providing employees with the knowledge and tools they need to develop and deploy AI responsibly, Microsoft is advancing responsible AI through its company culture and helping to ensure that AI is developed and used in a way that benefits everyone.

Empowering customers on their Responsible AI journeys

Microsoft highlights its commitment to helping customers on their responsible AI journey by sharing its learnings with them. The company provides transparency documentation for its platform AI services in the form of Transparency Notes to empower its customers to deploy their systems responsibly. These notes communicate in clear, everyday language the purposes, capabilities, and limitations of AI systems so that customers can understand when and how to deploy Microsoft’s platform technologies. Microsoft also provides practical tools to operationalize responsible AI practices, including digital learning paths that empower leaders to craft an effective AI strategy, foster an AI-ready culture, innovate responsibly, and more. The company believes that its efforts alone are not enough to secure the societal gains envisioned when responsible AI practices are adopted, and that it is important to empower customers to deploy AI responsibly. The goal of this section is to demonstrate Microsoft’s commitment to responsible AI and to provide customers with the tools and resources they need to deploy AI responsibly.

An example of empowering customers on their responsible AI journeys is Microsoft’s Responsible AI program, which includes a range of resources and tools to help customers deploy AI responsibly. One of the key resources provided by the program is Transparency Notes, which are designed to provide customers with clear and concise information about the purposes, capabilities, and limitations of Microsoft’s AI systems. The Transparency Notes also identify use cases that fall outside the solution’s capabilities and the Responsible AI Standard. By providing this information, Microsoft is empowering customers to make informed decisions about how to deploy AI in a responsible and ethical manner. In addition to Transparency Notes, Microsoft also provides digital learning paths that empower leaders to craft an effective AI strategy, foster an AI-ready culture, innovate responsibly, and more. These resources can be found online at https://aka.ms/rai. By providing these resources, Microsoft is empowering customers on their responsible AI journeys and helping to ensure that AI is developed and used in a way that benefits everyone.

Conclusions

In its conclusion, Microsoft emphasizes the importance of responsible AI and the need for a comprehensive approach to AI governance. The company believes that AI has the potential to transform society and improve people’s lives, but it must be developed and used in a responsible and ethical manner. Microsoft has been working on advancing responsible AI for the past seven years and has established a responsible AI program that involves coordination from the Office of Responsible AI and essential involvement across every part of the company. The company has also developed a comprehensive framework for responsible AI governance that prioritizes the safety, fairness, and inclusiveness of this powerful technology. Microsoft recognizes that responsible AI must be supported by the highest levels of leadership and championed at every level across the organization. The company is committed to empowering its customers to deploy AI responsibly and to providing them with the tools and resources they need to do so. Overall, Microsoft’s goal is to ensure that AI is developed and used in a responsible and ethical manner to benefit society as a whole.
