Google DeepMind Unveils Veo 2, A New AI Video Model To Rival OpenAI's Sora
Google DeepMind, Google’s flagship AI research lab, wants to beat OpenAI at the video-generation game — and it might just succeed, at least for a little while.
On Monday, DeepMind announced Veo 2, a next-gen video-generating AI and the successor to Veo, which powers a growing number of products across Google’s portfolio. Veo 2 can create two-minute-plus clips in resolutions up to 4K (4,096 x 2,160 pixels).
Notably, that’s 4x the resolution — and over 6x the duration — OpenAI’s Sora can achieve.
It’s a theoretical advantage for now, granted. In Google’s experimental video creation tool, VideoFX, where Veo 2 is now exclusively available, videos are capped at 720p and eight seconds in length. (Sora can produce up to 1080p, 20-second-long clips.)
VideoFX is behind a waitlist, but Google says it’s expanding the number of users who can access it this week. Eli Collins, VP of product at DeepMind, also told TechCrunch that Google will make Veo 2 available via its Vertex AI developer platform “as the model becomes ready for use at scale.”
“Over the coming months, we’ll continue to iterate based on feedback from users,” Collins said, “and [we’ll] look to integrate Veo 2’s updated capabilities into compelling use cases across the Google ecosystem … [W]e expect to share more updates next year.”
More on DeepMind’s Veo 2 AI video generation tools on TechCrunch
RAG vs. Fine Tuning | Cedric Clyburn | IBM
Join Cedric Clyburn as he explores the differences and use cases of Retrieval Augmented Generation (RAG) and fine-tuning in enhancing large language models.
This video covers the strengths, weaknesses, and common applications of both techniques, and provides insights on how to choose between them using machine learning and natural language processing principles.
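To make the distinction concrete, here is a minimal, illustrative sketch of the RAG pattern the video discusses: retrieve the documents most relevant to a query, then prepend them to the prompt so the model answers from fresh context rather than from its trained weights (which is what fine-tuning would change). The word-overlap scoring, the sample documents, and the function names are all assumptions for this toy example; production systems use vector embeddings for retrieval and send the assembled prompt to an actual LLM, both omitted here.

```python
def score(query: str, doc: str) -> int:
    """Count query words that appear in the document (case-insensitive).

    Stand-in for real similarity search over vector embeddings.
    """
    return len(set(query.lower().split()) & set(doc.lower().split()))


def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest overlap score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]


def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the model by placing retrieved context ahead of the question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


documents = [
    "Veo 2 generates video clips in resolutions up to 4K.",
    "RAG augments prompts with retrieved documents at query time.",
    "Fine-tuning updates a model's weights on task-specific data.",
]
prompt = build_prompt("What does fine-tuning update?", documents)
```

The key trade-off the video explores falls out of this structure: RAG swaps knowledge in and out at query time with no retraining, while fine-tuning bakes behavior into the weights themselves.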
ChatGPT’s AI Search Engine Is Rolling Out To Everyone
OpenAI has also made some improvements to ChatGPT search on mobile.
ChatGPT’s AI search engine is rolling out to all users starting today. OpenAI announced the news as part of its newest 12 days of ship-mas livestream, while also revealing an “optimized” version of the feature on mobile and the ability to search with advanced voice mode.
ChatGPT’s search engine first rolled out to paid subscribers in October. It will now be available on the free tier, though you’ll need an account and to be logged in.
One of the improvements for search on mobile makes ChatGPT look more like a traditional search engine. When looking for a particular location, like restaurants or local attractions, ChatGPT will display a list of results with accompanying images, ratings, and hours. Clicking on a location will pull up more information about the spot, and you can also view a map with directions from directly within the app.
More on OpenAI’s new search engine on The Verge
Laying Down the AI Infrastructure
Accel partner Matt Weigand discusses infrastructure investments into AI and the eventual maturity of AI agents. He joins Caroline Hyde on "Bloomberg Technology."
I have partnered with the Logictry AI platform that helps leaders make smarter decisions faster. Check out the case study on how National Instruments utilized the Logictry platform to enable their sales team, distributors and partners with the information they need, when they need it — in the sales process and beyond.
Review the National Instruments case study and take the self-led demo on the Logictry website. If you’re interested in more information or want to get started, contact me.
Meta Rolls Out Live AI, Translations, And Shazam To Its Smart Glasses With 11.0
Shazam will be available for everyone, while you’ll need to be in the Early Access Program for the live AI and live translations. (Editor: Which I am)
Meta just announced three new features are rolling out to its Ray-Ban smart glasses: live AI, live translations, and Shazam. Both live AI and live translation are limited to members of Meta’s Early Access Program, while Shazam support is available for all users in the US and Canada.
Both live AI and live translation were first teased at Meta Connect 2024 earlier this year. Live AI allows you to naturally converse with Meta’s AI assistant while it continuously views your surroundings. For example, if you’re perusing the produce section at a grocery store, you’ll theoretically be able to ask Meta’s AI to suggest some recipes based on the ingredients you’re looking at. Meta says users will be able to use the live AI feature for roughly 30 minutes at a time on a full charge.
Meanwhile, live translation allows the glasses to translate speech in real-time between English and Spanish, French, or Italian. You can choose to either hear translations through the glasses themselves, or view transcripts on your phone. You do have to download language pairs beforehand, as well as specify what language you speak versus what your conversation partner speaks.
More on Meta’s AI smart glasses updates on The Verge
When Regulation Becomes Code | a16z
Regulations are at an all-time high, with over 50,000 federal banking rules driving up costs for businesses and stifling innovation.
Angela Strange, General Partner at a16z, explores how AI is revolutionizing compliance by turning regulation into code—cutting costs, simplifying workflows, and empowering startups to compete.
In this episode, she unveils her 2025 Big Idea, highlighting the transformative impact on businesses, consumers, and the economy.
YouTube Is Letting Creators Opt In To Allowing Third-Party AI Training
But you still can’t tell Google not to train its AI on your videos.
YouTube is rolling out a way for creators to let third-party companies use their videos to train AI models. To be clear, the default setting for this is off, meaning that if you don’t want to let third-party companies scrape your videos for AI training, you don’t have to do anything. But if, for some reason, you do want to allow that — Google says that “some creators and rights holders” may want to — it’s going to be an option.
“We see this as an important first step in supporting creators and helping them realize new value for their YouTube content in the AI era,” a TeamYouTube staffer named Rob says in a support post. “As we gather feedback, we’ll continue to explore features that facilitate new forms of collaboration between creators and third-party companies, including options for authorized methods to access content.”
YouTube will be rolling out the setting in YouTube Studio “over the next few days,” and unauthorized scraping “remains prohibited,” Rob writes.
More about YouTube letting creators opt-in for AI training
Top Strategic Tech Trends For 2025 | Gene Alvarez: Gartner IT Symposium/Xpo
Amidst the challenges of navigating current social and economic disruptions and trends, future success requires CIOs and other IT leaders to look ahead.
Gartner’s Top Strategic Technology Trends for 2025 are the star map you can use to keep your organization forging safely into the future.
Each trending technology represents powerful new tools to vanquish obstacles to productivity, security and innovation.
In his session from Gartner IT Symposium/Xpo, Gartner Distinguished VP Analyst Gene Alvarez highlights how CIOs and other IT executives can explore and selectively use these trends to drive success.
That’s all for today, but new advancements, investments, and partnerships are happening as you read this. AI is moving fast; subscribe today to stay informed.