OpenAI Stuffs Stockings with Search + Google's Video Reality Check 🎥

ChatGPT Search is released to everyone…
In this edition we’ll be covering…
OpenAI’s latest gift
A breakdown of Google’s Sora alternative
A tutorial on how to build a RAG pipeline
5 trending AI signals
3 AI tools you can use
And much more…
Let’s get into it!
Sam Altman Stuffs our Stockings with Search
Folks, yesterday was Day 8 of OpenAI’s Shipmas. The sheer volume they’re throwing at us in such a short window is unprecedented, but hey, I’m not one to complain…
What’s the scoop?
OpenAI has expanded its AI search engine capabilities within ChatGPT to all users, including those on the free tier (but you still need an account).
Premium users are getting their own upgrade with Advanced Voice Mode leveling up with Search. Imagine asking your AI about hotel bookings and getting not just answers, but actual links to relevant websites.
Search has also been further optimized for mobile devices.
Quick refresher: You can enable Search on ChatGPT by clicking on the “globe” icon directly under the message bar.
So What?
Sure, OpenAI had technically released search back in October, but its expansion to free users marks a significant democratization of AI capabilities.
Speaking from personal experience, this feature has become so reliable that it's actually replaced Perplexity in my daily workflow…
Together with: AI Tool Report
There’s a reason 400,000 professionals read this daily.
Join The AI Report, trusted by 400,000+ professionals at Google, Microsoft, and OpenAI. Get daily insights, tools, and strategies to master practical AI skills that drive results.
Industry Intel
Google Veo 2: Better Than Sora?
Just when we thought OpenAI's Sora had the video generation spotlight all to itself, Google steps into the ring with not one, but two major announcements. Talk about perfect timing!
Google has recently introduced significant advancements in AI-driven video and image generation through updates to its Veo and Imagen models:
Google announced the next iteration of its text-to-video model, aptly named Veo 2. The biggest win here? Improved understanding of physics and camera operations. This advancement results in sharper textures and images, particularly in dynamic scenes.
An updated version of Imagen 3 (text-to-image model) is being introduced to users of Google’s ImageFX tool. This enhancement enables the creation of more vibrant and well-composed images across various styles, including photorealism, impressionism, and anime.
So What?
Yes, VideoFX is still playing hard to get behind a waitlist, but Google's opening the gates wider this week. And while physics improvements are impressive, we're still in the early days of this technology.
The days of wonky video generations might be numbered. The next challenge? Getting access to actually try it out…
Get Your Hands Dirty!
Building a RAG pipeline with LlamaIndex
Retrieval augmented generation (RAG) is a huge concept in the AI engineering space, but why is it so important?
LLMs are trained on extensive datasets but may lack access to specific, up-to-date information that you care about. RAG addresses this limitation by integrating your data with the LLM’s existing knowledge base, so you can talk to the model about your own data. Neat, right?
Here’s a simple implementation with LlamaIndex using Python:
Install Packages
pip install llama-index openai
Load and Index your documents
# In llama-index v0.10+, core classes live under llama_index.core
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
# Note: indexing uses OpenAI embeddings by default, so set your
# OPENAI_API_KEY environment variable first
# Load documents from a directory
documents = SimpleDirectoryReader('path_to_your_documents').load_data()
# Create a vector store index
index = VectorStoreIndex.from_documents(documents)
Set up the Query Engine
# Initialize the query engine
query_engine = index.as_query_engine()
Execute Queries
# Execute a query
response = query_engine.query("Your question here")
# Print the response
print(response)
And there’s a functioning RAG pipeline! For a more in-depth view, check out the docs below 👇️
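If you're curious what the query engine is doing for you, the core loop is: embed the question, retrieve the most relevant chunks, and stuff them into the prompt sent to the LLM. Here's a toy sketch of that retrieve-then-generate idea in plain Python — a word-overlap score stands in for real vector similarity, purely for illustration:

```python
# Toy illustration of the retrieve-then-generate loop inside a RAG pipeline.
# Real pipelines use vector embeddings; simple word overlap stands in
# for cosine similarity here.

def score(query: str, doc: str) -> int:
    """Count how many query words appear in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Combine retrieved context with the user question, ready for an LLM."""
    joined = "\n".join(context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "LlamaIndex loads documents and builds a vector store index.",
    "RAG combines retrieval with generation.",
    "The capital of France is Paris.",
]

context = retrieve("What does RAG combine?", docs)
prompt = build_prompt("What does RAG combine?", context)
print(prompt)
```

LlamaIndex handles all of this (plus chunking, embeddings, and the LLM call) behind that one `index.as_query_engine()` line.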
Quick Bites
Stay updated with our favorite highlights, dive in for a full flavor of the coverage!
Just 10 days after o1's public debut, we’re thrilled to unveil the open-source version of the groundbreaking technique behind its success: scaling test-time compute 🧠💡
By giving models more "time to think," LLaMA 1B outperforms LLaMA 8B in math—beating a model 8x its size.… x.com/i/web/status/1…
— clem 🤗 (@ClementDelangue)
7:32 PM • Dec 16, 2024
Hugging Face brings test-time compute, the technique behind OpenAI’s o1, to open-source models.
The hosts of Y Combinator’s Lightcone podcast reflect on this year’s biggest AI startup trends, moments, and setbacks.
Google is rolling out Gemini 2.0 Flash in NotebookLM, featuring a new content-focused interface, interactive AI hosts in Audio Overview, and NotebookLM Plus for power users and teams with enhanced features and usage limits.
Microsoft introduced Phi-4, a 14B parameter state-of-the-art small language model (SLM) that excels at complex reasoning in areas such as math, in addition to conventional language processing.
Anthropic has released Claude 3.5 Haiku to Claude users. 3.5 Haiku, which Anthropic unveiled in November, matches or bests the performance of Anthropic’s outgoing flagship model, 3 Opus, on specific benchmarks.
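The test-time compute idea above, in miniature: instead of taking a model's first answer, sample several candidates and keep the one a verifier scores highest (best-of-N sampling, one of the techniques in Hugging Face's write-up). Here's a toy sketch where `generate` and `verify` are hypothetical stand-ins for an LLM sampler and a reward model:

```python
import random

# Toy sketch of test-time compute via best-of-N sampling.
# generate() and verify() are hypothetical stand-ins for an LLM
# sampler and a reward/verifier model.

def generate(question: str, rng: random.Random) -> int:
    """Pretend LLM: returns a noisy guess at 17 + 25."""
    return 42 + rng.choice([-2, -1, 0, 1, 2])

def verify(question: str, answer: int) -> float:
    """Pretend verifier: higher score for answers closer to the true sum."""
    return -abs(answer - (17 + 25))

def best_of_n(question: str, n: int, seed: int = 0) -> int:
    """Spend extra compute at inference: sample n answers, keep the best-scoring one."""
    rng = random.Random(seed)
    candidates = [generate(question, rng) for _ in range(n)]
    return max(candidates, key=lambda a: verify(question, a))

# More samples -> more chances for the verifier to find a correct answer
print(best_of_n("What is 17 + 25?", n=16))
```

The "time to think" framing is exactly this trade-off: more samples (or longer reasoning chains) at inference buy accuracy without retraining the model.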
Trending Tools
🌊 Waveloom - Build and deploy AI workflows visually.
🦌 MooseMail - Tool created for LinkedIn lead generation.
🗒️ Aftercare - Turn surveys into conversations.
Remember, you can refer your friends to get access to our FULL Master Database with 50+ Tools (at the bottom of each newsletter)!
The Neural Network
In a recent blog post put out by OpenAI dissecting email correspondence with Elon Musk, Ilya Sutskever (cofounder of OpenAI), had this to say:
This was back in 2017 and it’s interesting to see where we are now. I think the only thing we have now is “compelling chatbots” though…
Until We Type Again…
Thank you for reading yet another edition of Digestible AI!
How did we do? This helps us create better newsletters!
If you have any suggestions or specific feedback, simply reply to this email or fill out this form. Additionally, if you found this insightful, don't hesitate to engage with us on our socials and forward this over to your friends!
You can find our past newsletter editions here!
This newsletter was brought to you by Digestible AI. Was this forwarded to you? Subscribe here to get your AI fix delivered right to your inbox. 👋