Major Announcements from OpenAI DevDay

Devs had a field day at OpenAI’s DevDay this week. Plus: learn how to edit images right in Instagram, and Pika dropped something that is distorting my world…
In this edition we’ll be covering…
Developments from OpenAI DevDay
How to edit photos with Meta’s new model, Llama 3.2, in Instagram
Microsoft’s new Copilot Vision 👀
And much more…
Let’s get into it!
OpenAI’s New APIs, Fine-Tuning, and a $157B Valuation
OpenAI has already had a wild week, but this Tuesday, their DevDay out in San Francisco did not disappoint… and of course, the devs loved it.
MAJOR developments include:
Realtime API: With the new Realtime API, developers can now create low-latency, speech-to-speech experiences. The API is built on the same technology that powers ChatGPT’s Advanced Voice Mode.
Prompt Caching: This is similar to what Anthropic released for Claude a few weeks ago. The technique saves developers 50% on cached inputs compared to uncached inputs and reduces latency by reusing recently seen input tokens.
Model Distillation: This tool allows you to train compact models on the knowledge of larger models directly on the OpenAI platform.
Vision Fine-Tuning: You can now fine-tune GPT-4o with both text and images, enabling applications like advanced search, improved object detection, and more accurate image analysis.
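To make the vision fine-tuning bullet concrete, here’s a minimal sketch of how a single training example might be assembled as a JSONL line in the chat-message format OpenAI’s fine-tuning endpoint uses, with an image attached to the user turn. The helper name, image URL, and question/answer text are all made up for illustration; check OpenAI’s fine-tuning docs for the exact schema before uploading real data.

```python
import json


def build_vision_example(image_url: str, question: str, answer: str) -> str:
    """Assemble one fine-tuning training record as a JSONL line.

    The user turn carries both a text part and an image part; the
    assistant turn carries the target completion the model should learn.
    (Hypothetical helper for illustration -- verify the schema against
    OpenAI's fine-tuning documentation.)
    """
    record = {
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            },
            {"role": "assistant", "content": answer},
        ]
    }
    return json.dumps(record)


if __name__ == "__main__":
    # Example record -- the URL below is a placeholder, not a real image.
    line = build_vision_example(
        "https://example.com/stop-sign.jpg",
        "What traffic sign is shown?",
        "A stop sign.",
    )
    print(line)
```

Each line of the uploaded JSONL file would be one such record; a training set is just many of these, one per image/answer pair.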
So What?
For folks building with OpenAI’s APIs, these are huge innovations that will enable more powerful, flexible, and accessible applications. I just know someone’s going to try to rebuild Siri with the Realtime API.
The team is locked and loaded after completing a deal to raise $6.6B, resulting in a $157B valuation.
Get Your Hands Dirty!
Edit Images in Instagram with Llama 3.2

Image from: CNET
Meta AI (now powered by Llama 3.2) is taking photo editing to a whole new level.
Using natural language, you can now add fun backgrounds, change colors, or apply unique aesthetics to your photos.
The new model will be available across the entire Meta family of apps, so everyone will have access to it at any time.
Here’s how you can edit photos right on Instagram:
Open Instagram on your smartphone.
Navigate to Direct Messages (the Messages icon in the top right).
Start a new chat and pick Meta AI.
Upload an image and edit it by only using text!
Microsoft’s Copilot Gets Eyes 👀
Microsoft is rolling out Copilot Vision, a feature that allows its AI assistant to understand and interact with the content on your screen.
Part of the new Copilot Labs initiative, Vision can analyze webpages, suggest actions, and answer questions about what you’re viewing, all through natural language commands.
Microsoft says this is an “opt-in” feature that respects user privacy, ensuring no data is stored or used for training. Initially limited to a few approved sites, Vision aims to offer a richer, more context-aware AI experience.
So What?
This is a great step forward for AI usability. Imagine being in a working session with your AI assistant… and it can tell you exactly what to type and where to click.
Concerns around privacy of course still remain, but Microsoft has stated they are taking “important steps” to ensure all Responsible AI protocols are met.
Industry Intel
Pika’s New Model is Distorting Reality
Sry, we forgot our password.
PIKA 1.5 IS HERE. With more realistic movement, big screen shots, and mind-blowing Pikaffects that break the laws of physics, there’s more to love about Pika than ever before.
Try it.
— Pika (@pika_labs)
3:49 PM • Oct 1, 2024
After a year of relative quiet, Pika Labs is back with a bang, introducing Pika 1.5, a revamped version of its text-to-video AI model that’s packed with unique special effects—aptly named “Pikaffects.”
Users can now transform objects in their videos with effects like “explode it,” “melt it,” or the internet’s new favorite, “cake-ify it.”
This is really, really cool but also somewhat terrifying. Pika is coming in with some heat. Check out this quick video of the “squish it” effect:

Video from: @ytjessie_ via X
If you want to know how to get started making creepy videos like that one with Pika 1.5, smash the button below!
Quick Bites
Stay updated with our favorite highlights, dive in for a full flavor of the coverage!
Starting this week, Advanced Voice is rolling out to all ChatGPT Enterprise, Edu, and Team users globally. Free users will also get a sneak peek of Advanced Voice.
Plus and Free users in the EU…we’ll keep you updated, we promise.
— OpenAI (@OpenAI)
6:14 PM • Oct 1, 2024
OpenAI’s Advanced Voice Mode will be rolling out to Free users in some capacity.
NVIDIA has released a powerful open-source artificial intelligence model that competes with proprietary systems from industry leaders like OpenAI and Google.
Cerebras Systems Inc., a maker of chips and other tech infrastructure for artificial intelligence, filed for an IPO.
Google is now working on its own “reasoning” LLM, following the release of OpenAI’s o1 model.
TechCrunch asked Meta if it plans to train AI models on the images from Ray-Ban Meta’s users, as it does on images from public social media accounts. The company wouldn’t say.
The Neural Network
Thought this was fitting given how OpenAI is moving these days… (someone go animate this with Pika 1.5 please 🤣 )

Until We Type Again…
How did we do? This helps us create better newsletters!
If you have any suggestions or specific feedback, simply reply to this email or fill out this form. Additionally, if you found this insightful, don't hesitate to engage with us on our socials and forward this over to your friends!
You can find our past newsletter editions here!
This newsletter was brought to you by Digestible AI. Was this forwarded to you? Subscribe here to get your AI fix delivered right to your inbox. 👋