Today, we are pleased to announce the public preview of Deep Research in Azure AI Foundry—an API and SDK-based offering that integrates OpenAI’s advanced research capabilities with Azure’s enterprise-grade platform.

With Deep Research, developers can create agents that plan, analyze, and synthesize information from across the web, automate complex research tasks, and generate transparent, auditable outputs that connect seamlessly with other tools and agents within Azure AI Foundry.

Transforming Research Automation with AI Agents

Generative AI and large language models (LLMs) have transformed how research gets done, as tools like ChatGPT Deep Research and Researcher in Microsoft 365 Copilot show by optimizing productivity and document workflows for millions of users. As organizations look to embed deep research capabilities directly into their applications and automate multi-step processes, programmable, auditable research automation becomes essential.

Building and Orchestrating Agents with Deep Research

Deep Research in the Foundry Agent Service is designed for developers who want to move beyond traditional chat interfaces. Through its composable API and SDK, Azure AI Foundry lets you:

  • Automate web-scale research: Use the o3-deep-research model grounded with Bing Search, so every insight is traceable to its source.
  • Programmatically build agents: Develop agents that can be invoked within apps and workflows, transforming deep research into a reusable, production-ready service.
  • Orchestrate complex workflows: Combine Deep Research agents with Logic Apps and Azure Functions to automate notifications and reporting tasks.
  • Ensure enterprise governance: Leverage Azure AI Foundry’s security and compliance features for full control and visibility over research processes.
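Because Deep Research is an API-based service, invoking an agent typically follows a submit-then-poll pattern: research tasks run for minutes, not seconds, so callers start a task and check back for the result. The sketch below shows that general pattern only; the `client` object, its `submit`/`status`/`result` methods, and the status strings are illustrative stand-ins, not the actual Foundry SDK surface.

```python
import time

def run_research_task(client, query, poll_interval=2.0, timeout=300.0):
    """Submit a research query and poll until the task completes.

    `client` is any object exposing submit/status/result -- a
    hypothetical stand-in for an SDK client, shown only to
    illustrate the long-running-task pattern.
    """
    task_id = client.submit(query)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = client.status(task_id)
        if status == "succeeded":
            return client.result(task_id)
        if status == "failed":
            raise RuntimeError(f"research task {task_id} failed")
        time.sleep(poll_interval)
    raise TimeoutError(f"research task {task_id} did not finish in {timeout}s")
```

Wrapping the polling loop in a function like this is what turns a long-running research call into a reusable service that apps, Logic Apps, or Azure Functions can invoke.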

Powering Deep Research Tasks with Azure AI

Architected for flexibility and transparency, Deep Research is built to meet the demands of robust business applications. The core research model, o3-deep-research, drives a multi-step process that integrates Grounding with Bing Search and the latest OpenAI models for research execution.

Streamlined Task Execution

Once a research query is submitted, the agent clarifies intent and gathers context using GPT-series models. With the task defined, the agent securely retrieves recent web data through Grounding with Bing Search, reducing the risk of inaccuracies from stale or irrelevant content.

During execution, the agent analyzes and synthesizes the gathered information into a comprehensive, nuanced output that documents its reasoning path and source citations, which is essential for compliance in regulated industries.
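The clarify, ground, and synthesize steps above can be pictured as a plain pipeline. The functions below are simplified stand-ins for the GPT-series clarification step, the Bing Search grounding step, and the o3-deep-research synthesis step; they show only how citations flow from findings into the final report, not the service's actual internals.

```python
def clarify_intent(query):
    # Stand-in for the GPT-series clarification step: normalize the
    # query and record the assumed scope before research begins.
    return {"query": query.strip(), "scope": "public web"}

def ground_with_search(task):
    # Stand-in for Grounding with Bing Search: each finding keeps its
    # source URL so the final report stays traceable.
    return [
        {"claim": f"finding about {task['query']}", "source": "https://example.com/a"},
        {"claim": f"context for {task['query']}", "source": "https://example.com/b"},
    ]

def synthesize(task, findings):
    # Stand-in for o3-deep-research synthesis: the report carries
    # every source it drew on, which is what makes it auditable.
    body = " ".join(f["claim"] for f in findings)
    citations = sorted({f["source"] for f in findings})
    return {"query": task["query"], "report": body, "citations": citations}

task = clarify_intent("  impact of AI on supply chains ")
report = synthesize(task, ground_with_search(task))
```

The key design point is that citations are never detached from claims along the way, so the output can be audited end to end.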

Integration and Customization

Deep Research’s API enables invocation from various platforms, allowing seamless integration into internal portals, business applications, or as part of a larger agent ecosystem. Organizations can customize their agents to suit specific needs, enhancing functionality across diverse enterprise workflows ranging from analytics to regulatory compliance.

Pricing and Availability

Pricing for the Deep Research model is as follows:

  • Input: $10.00 per 1 million tokens
  • Cached Input: $2.50 per 1 million tokens
  • Output: $40.00 per 1 million tokens

Please note that charges for Grounding with Bing Search and the underlying GPT model will also apply.
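At these rates, the o3-deep-research portion of a request's cost can be estimated from its token counts. The token figures in the example are made up for illustration, and Grounding with Bing Search and the clarifying GPT model are billed separately and not included here.

```python
# Per-million-token rates for o3-deep-research (USD), from the table above.
RATES = {"input": 10.00, "cached_input": 2.50, "output": 40.00}

def model_cost(input_tokens, cached_input_tokens, output_tokens):
    """Estimated o3-deep-research model cost in USD for one request."""
    return (
        input_tokens * RATES["input"]
        + cached_input_tokens * RATES["cached_input"]
        + output_tokens * RATES["output"]
    ) / 1_000_000

# Example: 200k fresh input, 50k cached input, 30k output tokens.
cost = model_cost(200_000, 50_000, 30_000)  # 2.00 + 0.125 + 1.20 = 3.325 USD
```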

Getting Started with Deep Research

Deep Research is currently available in a limited public preview for Azure AI Foundry Agent Service customers. To get started:

  • Sign up for the limited public preview for early access.
  • Visit our documentation for more details.
  • Check out our learning modules to build your first agent.

We look forward to the innovative solutions you will create. Stay tuned for customer success stories, future feature enhancements, and ongoing updates that will unlock the next generation of enterprise AI agents.