Artificial Intelligence Startups Attracted 25% Of Europe’s Venture Capital Funding
Venture funding into Europe is heading for a flat year, but this may obscure the fact that European AI startups are thriving.
According to VC firm Balderton Capital and Dealroom, 25% of VC funding into the region — approximately $13.7 billion — went to AI startups this year, compared to 15% four years ago, resulting in several new unicorns, such as Poolside and Wayve.
For Balderton Capital general partner James Wise, the most important takeaway is that “you can raise hundreds of millions of euros, even billions of euros, as a very early-stage AI company if you’ve got a breakthrough technology in Europe, just as you can in the U.S.”
This counters what he sees as a “relatively negative narrative” around Europe: Collectively, European AI companies have doubled in value in just four years, reaching $508 billion. Per these new figures, this category now represents nearly 15% of the entire tech sector in value, up from 12% three years ago.
This means there’s funding available to AI startups, whether at early or later stages, although it may not always come from Europe itself. In addition, American AI companies see Europe as a talent pool to tap into.
“We’re still probably a derivative of the U.S. market, we’re still reliant on it, but it’s not like nothing’s happening here. It’s actually a really buoyant ecosystem,” Wise said.
More on AI Funding in Europe on TechCrunch
Artificial Intelligence Trends For 2025
The world of AI is changing, and changing quickly. Martin Keen, Master Inventor, is here to help set some expectations for what's coming in AI in 2025.
Will large language models (LLMs) get bigger? Smaller? Both? What's in store for AI agents? Will AI finally be able to remember everything?
All of this, and more speculation about what 2025 holds in store.
OpenAI Just Gave ChatGPT Plus Users Unlimited Access To Sora — But There's A Catch
OpenAI is giving ChatGPT Plus subscribers unlimited access to Sora over the holidays. The offer applies only to the "relaxed queue", so video generations will take a little longer, but it's a chance to see what the AI video generator can really achieve.
Normally, subscribers to the $20-per-month plan get 50 video generations per month, with no mechanism for increasing that other than paying $200 per month for the ChatGPT Pro plan, which most users don't need.
CEO Sam Altman wrote on X: "Our GPUs get a little less busy during late December as people take a break from work, so we are giving all plus users unlimited Sora access via the relaxed queue over the holidays!"
Other limits on the Plus plan still apply, including video resolution: clips are capped at 480p for 10-second videos and 720p for 5-second videos. Plus subscribers also can't use Sora to animate images of people, real or AI-generated.
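As a rough illustration of the duration/resolution trade-off described above, here is a hypothetical helper function (purely for illustration; this is not an OpenAI API) that maps a clip length to the maximum Plus-plan resolution:

```python
def max_plus_resolution(duration_seconds: int) -> str:
    """Return the maximum Sora output resolution on the ChatGPT Plus plan
    for a given clip length, per the limits described above.
    Hypothetical helper for illustration only -- not an OpenAI API."""
    if duration_seconds <= 5:
        return "720p"   # shorter clips can render at the higher resolution
    elif duration_seconds <= 10:
        return "480p"   # 10-second clips are capped at 480p
    else:
        raise ValueError("clip length exceeds the limits described above")

print(max_plus_resolution(5))   # 720p
print(max_plus_resolution(10))  # 480p
```

The point of the sketch is simply that resolution and duration trade off against each other on the Plus tier; only the Pro plan lifts those caps.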
More about OpenAI relaxed restrictions on Sora on Tom’s Guide
New Playbook | AI-Powered Acquisitions
What if small businesses could harness AI to transform their operations and fuel a new wave of growth through strategic acquisitions? Joe Schmidt, Partner at a16z, shares his 2025 Big Idea: Romanticizing Inorganic Growth.
By integrating AI and automation into traditional services companies, Joe envisions a model where businesses not only improve efficiency but also expand by acquiring and enhancing other businesses.
Learn how AI-powered automation is reshaping industries like insurance, healthcare, and freight. Whether you're a founder, investor, or curious listener, this episode offers an exciting glimpse into industries that were previously reserved for private equity.
Learn How GE Healthcare Used AWS To Build A New Artificial Intelligence Model That Interprets MRIs
MRI images are understandably complex and data-heavy. Because of this, developers training foundation models for MRI analysis have had to slice captured images into 2D. But this yields only an approximation of the original image, limiting the model's ability to analyze intricate anatomical structures. That creates challenges in complex cases involving brain tumors, skeletal disorders, or cardiovascular diseases.
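To make the 2D-slicing limitation concrete, here is a minimal NumPy sketch (illustrative only; GE Healthcare's actual pipeline is not public): cutting a 3D volume into 2D planes discards the through-plane context that a full-3D model can exploit.

```python
import numpy as np

# A toy 3D "MRI volume": depth x height x width voxel grid.
# Real MRI volumes are similar stacks of axial slices.
volume = np.random.rand(64, 256, 256)

# 2D approach: the volume is cut into independent axial slices,
# so each training example loses all through-plane context.
slices_2d = [volume[z] for z in range(volume.shape[0])]
print(slices_2d[0].shape)   # (256, 256) -- one plane, depth discarded

# 3D approach: the model sees volumetric patches, preserving
# anatomical structure across all three axes.
patch_3d = volume[:32, :128, :128]
print(patch_3d.shape)       # (32, 128, 128) -- depth retained
```

A structure that spans several slices, such as a tumor, appears only as disconnected cross-sections to a 2D model, while a 3D model can learn its full shape.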
But GE Healthcare appears to have overcome this massive hurdle, introducing the industry’s first full-body 3D MRI research foundation model (FM) at this year’s AWS re:Invent. For the first time, models can use full 3D images of the entire body.
GE Healthcare’s FM was built on AWS from the ground up (there are very few models specifically designed for medical imaging like MRIs) and is trained on more than 173,000 images from over 19,000 studies. Developers say they were able to train the model with one-fifth the compute previously required.
GE Healthcare has not yet commercialized the foundation model; it is still in an evolutionary research phase. An early evaluator, Mass General Brigham, is set to begin experimenting with it soon.
“Our vision is to put these models into the hands of technical teams working in healthcare systems, giving them powerful tools for developing research and clinical applications faster, and also more cost-effectively,” GE HealthCare chief AI officer Parry Bhatia told VentureBeat.
More on GE Healthcare’s AI analysis of MRIs on VentureBeat
Manohar Paluri | How Open Should AI Be?
The latest version of Llama, Meta's large language model, underscores the company's argument for open-source development. Just how open should the foundations of Artificial Intelligence be?
The executive behind building Llama joins us to answer the question and share where Meta might go as it builds toward artificial general intelligence.
Speaker: Manohar Paluri, Vice President of Engineering for Gen AI, Meta
Interviewer: Jason Del Rey, Fortune
Meta Ray-Bans' New Live AI And Translation, Hands-On: Signs Of AR Glasses To Come
I activated Meta Ray-Bans' new live AI feature and took a morning walk across Manhattan. It was a strange experience. A white LED in the corner of my eye stayed on as my glasses kept a running feed of my life. I awkwardly asked questions: about the pigeons, about the construction workers, about whether it knew what car was nearby or who owned those trucks across the street. I got mixed answers, sometimes no answer at all. And then my connection dropped because of spotty Bluetooth in the city.
My first steps with an always-aware AI companion have been weird and even more science-fictiony than what I'd experienced over the last year. Much like a recent demo with Google's always-on, Gemini-powered glasses, Meta's Ray-Bans -- which are already very much available -- are taking the next steps toward being something like an always-aware assistant. Or agent, as the AI landscape calls it now. Live AI and live translation, once on, stay on, with the assumption that the AI can see what you see. And maybe it'll help you do something you don't know how to do.
But these features also look like previews of what could be a whole new set of Meta glasses coming next year, ones that could have their own display and maybe even a gesture-controlling wristband too, based on hints Mark Zuckerberg gave on Threads last week after a story written by The Wall Street Journal's Joanna Stern.
At the moment, Live AI feels like an odd glimpse of a more always-on, more intrusive AI future; from my very early attempts, it's more of a companion than a helper. And yet translation, when it works, feels surprisingly helpful... even if it operates at a bit of a delay.
More on Meta’s live AI, translation and music discovery on CNET
AI Native 2024 – Cerebras CTO Sean Lie
That's all for today, but new advancements, investments, and partnerships are happening as you read this. AI is moving fast; subscribe today. Happy Holidays!