OpenAI DevDay Unveils GPT-4 Turbo
At Datatunnel, we’re excited to report on the OpenAI DevDay, a landmark event where the cutting-edge “GPT-4 Turbo” was unveiled among other significant advancements. Dive deep into the intricacies of these developments on the OpenAI blog, relive the moments through the keynote recording, or stay tuned to the live buzz by following the new @OpenAIDevs Twitter account.

Here’s the essence of what’s new and exciting:
Introducing GPT-4 Turbo
- OpenAI raised the bar with the launch of GPT-4 Turbo, touted as their most sophisticated AI model to date. It boasts a staggering 128K context window and knowledge of world events up to April 2023.
- They’ve slashed the cost of using GPT-4 Turbo – input tokens now cost just $0.01 per 1K, and output tokens are $0.03 per 1K, substantially lowering the barrier to access.
- Enhanced function calling is in place, allowing multiple functions to be called in a single message with more accurate function parameters, while a new JSON mode guarantees that responses are syntactically valid JSON.
- A ‘reproducible outputs’ beta feature ensures that model outputs are consistent.
- The preview model, accessible via the gpt-4-1106-preview API endpoint, is just a taste of what’s to come, with a full production model set to roll out later this year.
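To make the features above concrete, here is a minimal sketch of a Chat Completions request body combining the preview model, JSON mode, and function calling, plus a quick cost estimate at the new rates. The payload fields mirror the announcement; the helper function names are our own illustration, not part of the OpenAI SDK.

```python
# Sketch of a Chat Completions request body using the DevDay features:
# the gpt-4-1106-preview model, JSON mode, and function (tool) calling.
# Helper names here are illustrative, not part of the OpenAI SDK.

def build_chat_request(user_prompt: str) -> dict:
    """Assemble a request body for the gpt-4-1106-preview model."""
    return {
        "model": "gpt-4-1106-preview",
        "messages": [
            {"role": "system",
             "content": "You are a helpful assistant. Reply in JSON."},
            {"role": "user", "content": user_prompt},
        ],
        # JSON mode: constrains the model to emit syntactically valid JSON.
        "response_format": {"type": "json_object"},
        # Function calling: the model may request one or more tool calls.
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """GPT-4 Turbo cost at $0.01 per 1K input and $0.03 per 1K output tokens."""
    return input_tokens / 1000 * 0.01 + output_tokens / 1000 * 0.03
```

A real call would hand this payload to the Chat Completions endpoint (for example `client.chat.completions.create(**build_chat_request(...))` with the official Python SDK). At the new rates, a request with 10K input tokens and 1K output tokens comes to about $0.13.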
Updated GPT-3.5 Turbo
- The gpt-3.5-turbo-1106 update offers a default 16K context window and more affordable pricing tiers, marking a new era of extended interaction at a fraction of the cost.
- The fine-tuned version of GPT-3.5 becomes even more economical, with prices cut down by 75% for input and 62% for output tokens.
- It aligns with GPT-4 Turbo, featuring improved function calling and the beta for reproducible outputs.
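The reproducible-outputs beta mentioned above works by passing a fixed `seed` in the request; here is a hedged sketch of such a request body for gpt-3.5-turbo-1106 (the helper name is our own, not part of the SDK).

```python
# Sketch of a gpt-3.5-turbo-1106 request using the reproducible-outputs beta:
# supplying a fixed seed (plus temperature 0) asks the model to return the
# same completion for the same input. Helper name is illustrative.

def build_reproducible_request(prompt: str, seed: int = 42) -> dict:
    return {
        "model": "gpt-3.5-turbo-1106",  # 16K context window by default
        "messages": [{"role": "user", "content": prompt}],
        "seed": seed,       # same seed + same parameters -> consistent output
        "temperature": 0,   # remove sampling randomness
    }
```

Responses carry a `system_fingerprint` field; if it differs between two calls, the backend configuration changed and identical outputs are no longer guaranteed even with the same seed.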
Assistants API Beta Release
- OpenAI is set to redefine app development with its Assistants API, enabling the creation of agent-like applications that can carry out complex, multi-step tasks on a user’s behalf.
- This innovation allows developers to craft AI assistants tailored to specific roles, from natural language data analysis to AI-powered vacation planning.
- Developers can leverage persistent Threads for better state management and tap into new tools like Code Interpreter, Retrieval, and Function Calling.
- Experimentation is made easy with the Playground platform, requiring no code to get started.
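The Assistants workflow described above follows a create-assistant, create-thread, add-message, start-run sequence. As a rough sketch, the request bodies a client would POST look like this; the endpoint paths follow the beta announcement, while the wrapper function is purely illustrative.

```python
# Sketch of the Assistants API flow as plain (endpoint, body) pairs:
# create an assistant with built-in tools, open a persistent Thread,
# add a user message, then start a Run. Helper name is illustrative.

def assistant_flow(user_question: str) -> list:
    """Return the (endpoint, body) pairs a client would POST, in order."""
    return [
        ("/v1/assistants", {
            "model": "gpt-4-1106-preview",
            "name": "Data analyst",
            "instructions": "Analyze the user's data and answer in plain English.",
            # New built-in tools announced at DevDay:
            "tools": [{"type": "code_interpreter"}, {"type": "retrieval"}],
        }),
        # Threads persist conversation state server-side between calls.
        ("/v1/threads", {}),
        ("/v1/threads/{thread_id}/messages", {
            "role": "user",
            "content": user_question,
        }),
        # A Run executes the assistant against the thread's messages.
        ("/v1/threads/{thread_id}/runs", {"assistant_id": "{assistant_id}"}),
    ]
```

Because Threads hold the conversation state, the client never has to resend prior messages; it only appends new ones and polls the Run for results.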
Multimodal Capabilities
- GPT-4 Turbo now interprets visual inputs in the Chat Completions API, paving the way for applications like image captioning and visual analysis.
- The vision features can be tested using the gpt-4-vision-preview model, and these will be integrated into GPT-4 Turbo’s full version later in the year.
- The DALL·E 3 API for image generation and the new TTS model for text-to-speech capabilities enrich the API ecosystem, making AI more versatile and sensory-rich.
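With the vision features above, a message’s `content` becomes a list of parts mixing text and image references. A minimal sketch of such a request body for the gpt-4-vision-preview model (the helper name is our own illustration):

```python
# Sketch of a multimodal Chat Completions request: the user message mixes
# a text part and an image_url part, targeting the gpt-4-vision-preview
# model. Helper name is illustrative, not part of the OpenAI SDK.

def build_vision_request(question: str, image_url: str) -> dict:
    return {
        "model": "gpt-4-vision-preview",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        "max_tokens": 300,
    }
```

The same content-part structure should carry over when vision lands in the full GPT-4 Turbo release later in the year.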
Customizable GPTs in ChatGPT
- OpenAI has launched GPTs, a feature that amalgamates instructions, data, and various capabilities into a customizable ChatGPT iteration.
- This customization extends beyond OpenAI’s proprietary tools like DALL·E, empowering developers with the ability to define actions themselves for a tailored AI experience.
We at Datatunnel are eager to see the innovative applications these updates will inspire among developers and AI enthusiasts alike.
Resources
- New models and developer products announced at DevDay (openai.com)
- @OpenAIDevs Twitter
- OpenAI reduced pricing
- OpenAI Function calling
- JSON mode
- Reproducible outputs
- OpenAI Fine-Tuning
- OpenAI Assistants API
- Assistants API Playground
- OpenAI Visual Inputs
- OpenAI DALL·E 3
- OpenAI Image Generation API
- OpenAI TTS Features
- OpenAI GPTs