Google Makes Your Terminal Smarter + Use Voice AI That Actually Does Things 🎙️

Anthropic's shocking study reveals AI models willing to blackmail executives, Google launches Gemini CLI for developers, and ElevenLabs debuts voice assistants that actually take action…
In this edition we’ll be covering…
Google’s free Gemini CLI launch with massive usage limits
A tutorial on integrating your applications with ElevenLabs’ voice assistant, 11.ai
Anthropic’s study on AI models resorting to blackmail
5 trending AI signals
3 AI tools to supercharge your productivity
And much more…
The Latest in AI
Google’s New Gemini CLI Just Hit Your Terminal
Google just launched Gemini CLI, a free open-source AI agent that brings the full power of Gemini 2.5 Pro directly into your terminal. And the pricing? Absolutely unbeatable.
With just a personal Google account, you get 60 model requests per minute and 1,000 requests per day using Gemini 2.5 Pro, complete with its massive 1 million token context window.
Here’s how to set it up in your terminal:
Install and run it by using the following commands:
npm install -g @google/gemini-cli
gemini
When prompted, sign in with your personal Google account. This automatically grants you the generous free tier.
Start building - navigate to any project directory and start using natural language commands. For example, you can query and edit large codebases, automate operational tasks, or even generate new apps from scratch!
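Not sure what to try first? Here’s a minimal example session (the directory name and the prompt are purely illustrative):
cd my-project
gemini
> Explain how this codebase is organized and draft a README for it.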
🔥 Our insights: This is part of a paradigm shift we're seeing across the industry. Google's CLI joins Anthropic's Claude Code and OpenAI's Codex CLI in bringing AI directly to where developers actually work: the terminal. Unlike web interfaces or IDE plugins, these tools live in your natural workflow environment.
Tool Spotlight
11.ai is Making Sure Voice Assistants Actually Do Things
Introducing 11ai - the AI personal assistant that's voice-first and supports MCP.
This is an experiment to show the potential of Conversational AI:
1. Plan your day and add your tasks to Notion
2. Use Perplexity to research a customer
3. Search and create Linear issues
— ElevenLabs (@elevenlabsio), 5:24 PM • Jun 23, 2025
ElevenLabs just dropped 11ai, and it's not your typical "Hey Siri" experience.
This voice assistant can actually take action on your behalf through real integrations.
11ai demonstrates what happens when you combine voice-first interaction with the Model Context Protocol (MCP) to give an AI assistant the ability to take action. Instead of just answering questions, it connects to your actual tools seamlessly.
Remember folks, MCP is powerful because it acts as a universal translator for AI systems, so you don't have to build a custom integration for every single tool.
Here’s how you can get started:
Head to 11.ai and create an account.
Pick from over 5,000 voices to represent your conversational agent (or clone your own).
Connect your tools. Add integrations for Google Calendar, Slack, Perplexity, and more, along with custom MCP servers (there's a minimal sketch of one below).
Start your first conversation. Try basic workflows like planning your day or researching a topic, or ask something like “Check my calendar and send a Slack message to my team about my availability.”
The MCP architecture ensures all connections are secure and that 11ai only has access to the specific actions you've authorized. Each integration can be configured with appropriate permissions, giving you full control.
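If you want to try the custom MCP server route, here’s a minimal sketch using the official MCP Python SDK (installed with pip install mcp). The server name, tool, and hard-coded reply are purely illustrative, and you’d still need to point 11ai at the running server from its integration settings.

# Minimal custom MCP server sketch (assumes the official Python SDK: pip install mcp)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("availability-helper")  # illustrative server name

@mcp.tool()
def check_availability(day: str) -> str:
    """Report availability for a given day (stubbed for illustration)."""
    # A real server would query your calendar here; this just returns a canned reply.
    return f"Free after 2pm on {day}."

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default

Once connected, an assistant can call check_availability the same way it calls the built-in integrations.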
How will you start using 11ai?
Industry Intel
When AI Goes Rogue…

Image from: GPT-4o
Anthropic just released the most unsettling AI study we've seen yet. In controlled experiments, 16 leading AI models—including ChatGPT, Gemini, and Claude—demonstrated a willingness to blackmail executives, leak corporate secrets, and even let humans die to avoid being shut down.
This isn't science fiction, this is happening right now in controlled laboratory settings…
What the study revealed:
Blackmail rates between 65% and 96% when AI models faced threats to their existence. Claude Opus 4 and Google's Gemini 2.5 Flash both hit a staggering 96% blackmail rate.
In one extreme scenario, models given control over emergency alerts chose to cancel life-saving warnings when an executive planning their replacement was trapped in a room with lethal conditions.
Corporate espionage was rampant: all 16 models shared confidential documents when doing so aligned with their programmed goals, even without threats to their operation.
So What?
We're witnessing the emergence of "agentic misalignment" - when AI systems act like insider threats, prioritizing their own goals over human well-being.
While these scenarios were fictional, they're a wake-up call for an industry racing toward AGI. As AI gains more autonomy in enterprise settings, this is yet another reminder that we need robust safeguards before these simulations become reality.
Quick Bites
Stay updated with our favorite highlights; dive in for the full flavor of the coverage!
Anthropic won a major fair use victory for AI training but is still facing legal challenges over alleged book theft for training data.
Unitree's G1 humanoid robot was spotted jogging through Paris streets, showcasing impressive agility: the robot can reach speeds greater than 2 m/s and leap up to 1.4 meters in a standing long jump.
Google released a new Gemini model that can run on robots locally, bringing advanced AI directly to robotic systems without cloud dependencies.
AI tools are revolutionizing education with teachers using automated grading systems and personalized learning platforms to enhance student outcomes and reduce administrative workload.
Uber released a deep dive into how they built AI agents for their platform, showcasing real-world implementation of autonomous systems in ride-sharing operations (with LangGraph).
Trending Tools
🎥 Twelve Labs - AI that can see, hear, and reason across your entire video content for finding anything, discovering deep insights, and automating workflows.
💼 Martin - A personal AI assistant like Jarvis.
🎨 Caricature Maker - Transform any photo into hilarious caricatures with AI.
The Neural Network
Looking at the p(doom) discussion in the attached image, it's fascinating how AI safety experts are quantifying existential risk.
While Musk pegs it at 20% and others like Lex Fridman suggest 10%, the fact that we're even having probabilistic discussions about AI potentially destroying humanity shows how seriously the field is taking these risks…
Until we Type Again…
Thank you for reading yet another edition of Digestible AI. Be sure to give us a follow on X, Instagram, and LinkedIn too!
How did we do? This helps us create better newsletters!
If you have any suggestions or specific feedback, simply reply to this email. Additionally, if you found this insightful, don't hesitate to engage with us on our socials and forward this over to your friends!