The Next Great Leap In AI Is Behind Schedule And Crazy Expensive
OpenAI has run into problem after problem on its new artificial-intelligence project, code-named Orion
OpenAI’s new artificial-intelligence project is behind schedule and running up huge bills. It isn’t clear when—or if—it’ll work. There may not be enough data in the world to make it smart enough.
The project, officially called GPT-5 and code-named Orion, has been in the works for more than 18 months and is intended to be a major advancement in the technology that powers ChatGPT. OpenAI’s closest partner and largest investor, Microsoft, had expected to see the new model around mid-2024, say people with knowledge of the matter.
OpenAI has conducted at least two large training runs, each of which entails months of crunching huge amounts of data, with the goal of making Orion smarter. Each time, new problems arose and the software fell short of the results researchers were hoping for, people close to the project say.
At best, they say, Orion performs better than OpenAI’s current offerings, but hasn’t advanced enough to justify the enormous cost of keeping the new model running. A six-month training run can cost around half a billion dollars in computing costs alone, based on public and private estimates of various aspects of the training.
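For a sense of where an estimate like that could come from, here is a rough back-of-envelope calculation; the cluster size, run length, and per-GPU-hour price below are purely illustrative assumptions, not figures reported for Orion.

```python
# Rough, hypothetical back-of-envelope for a six-month training run.
# None of these numbers are reported for Orion; they are illustrative only.
gpus = 50_000                # assumed accelerator count for a frontier-scale cluster
hours = 182 * 24             # roughly six months of continuous wall-clock time
dollars_per_gpu_hour = 2.30  # assumed blended compute price at scale

total_cost = gpus * hours * dollars_per_gpu_hour
print(f"Estimated compute cost: ${total_cost / 1e9:.2f}B")  # ~$0.50B
```

Under those assumptions the compute bill alone lands around half a billion dollars, which is why a run that falls short of expectations is so costly.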
OpenAI and its brash chief executive, Sam Altman, sent shock waves through Silicon Valley with ChatGPT’s launch two years ago. The technology promised to keep improving dramatically and to permeate nearly all aspects of our lives. Tech giants could spend $1 trillion on AI projects in the coming years, analysts predict.
The weight of those expectations falls mostly on OpenAI, the company at ground zero of the AI boom. The $157 billion valuation investors gave OpenAI in October is premised in large part on Altman’s prediction that GPT-5 will represent a “significant leap forward” in all kinds of subjects and tasks.
More on OpenAI’s Orion project on The Wall Street Journal
CNET: First Look | OpenAI's Sora Artificial Intelligence Video Generator
CNET’s Stephen Beacham tried OpenAI's Sora AI video generator and was blown away by the results but underwhelmed by the limits on high-resolution generations and video length. Check out all the Sora AI video features and capabilities.
How SLMs Can Beat Their Bigger, Resource-Intensive Cousins
Two years on from the public release of ChatGPT, conversations about AI are inescapable as companies across every industry look to harness large language models (LLMs) to transform their business processes. Yet, as powerful and promising as LLMs are, many business and IT leaders have come to over-rely on them and to overlook their limitations. This is why I anticipate a future where specialized language models, or SLMs, will play a bigger, complementary role in enterprise IT.
SLMs are more typically referred to as “small language models” because they require less data and training time and are “more streamlined versions of LLMs.” But I prefer the word “specialized” because it better conveys the ability of these purpose-built solutions to perform highly specialized work with greater accuracy, consistency and transparency than LLMs. By supplementing LLMs with SLMs, organizations can create solutions that take advantage of each model’s strengths.
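As a concrete (and deliberately simplified) illustration of that complementary pattern, the sketch below routes a query to a purpose-built SLM when it falls within that model's domain and falls back to a general LLM otherwise. The classifier and model wrappers are hypothetical stand-ins, not any vendor's actual API.

```python
# Minimal sketch of an "SLM first, LLM fallback" router.
# `classify_domain`, `slm`, and `llm` are hypothetical stand-ins, not real APIs.

def answer(query: str, classify_domain, slm, llm) -> str:
    """Route domain-specific queries to a specialized model, others to a generalist."""
    domain = classify_domain(query)        # e.g. "claims", "contracts", "other"
    if domain in slm.supported_domains:    # the SLM was purpose-built for this work
        return slm.generate(query)         # cheaper, more consistent, easier to audit
    return llm.generate(query)             # generalist fallback for everything else
```

The design point is less about the routing code than about the division of labor: the specialized model handles the high-stakes, well-defined work, and the generalist handles everything else.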
Trust and the LLM ‘black box’ problem
LLMs are incredibly powerful, yet they are also known for sometimes “losing the plot,” or offering outputs that veer off course due to their generalist training and massive data sets. That tendency is made more problematic by the fact that OpenAI’s ChatGPT and other LLMs are essentially “black boxes” that don’t reveal how they arrive at an answer.
This black box problem is going to become a bigger issue going forward, particularly for companies and business-critical applications where accuracy, consistency and compliance are paramount. Think healthcare, financial services and legal as prime examples of professions where inaccurate answers can have huge financial consequences and even life-or-death repercussions. Regulatory bodies are already taking notice and will likely begin to demand explainable AI solutions, especially in industries that rely on data privacy and accuracy.
While businesses often deploy a “human-in-the-loop” approach to mitigate these issues, an over-reliance on LLMs can lead to a false sense of security. Over time, complacency can set in and mistakes can slip through undetected.
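One common way to keep humans meaningfully in the loop, rather than rubber-stamping outputs, is a confidence gate that escalates uncertain results for review. The sketch below assumes a hypothetical model wrapper that returns a label with a confidence score; the threshold is an arbitrary assumption.

```python
# Illustrative human-in-the-loop gate: low-confidence outputs are escalated
# rather than auto-accepted. `model` and `review_queue` are hypothetical
# stand-ins, and the 0.9 threshold is an arbitrary assumption.

def decide(document: str, model, review_queue, threshold: float = 0.9):
    label, confidence = model.classify(document)   # assumed (label, score) return
    if confidence < threshold:
        review_queue.put((document, label))        # hand off to a human reviewer
        return None                                # no automated decision taken
    return label                                   # confident enough to automate
```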
More about the benefits of small language models on VentureBeat
A Conversation With Our Co-Founders
The co-founders of Anthropic discuss the company's past, present, and future. From left to right: Chris Olah, Jack Clark, Daniela Amodei, Sam McCandlish, Tom Brown, Dario Amodei, and Jared Kaplan.
3 Steps To Include Artificial Intelligence In Your Future Strategic Plans
It's nearly impossible to discuss business strategy or future trends without mentioning how AI and GenAI are transforming businesses and workplaces.
Leaders recognize the technology’s potential to drive breakthrough business performance. And the numbers speak for themselves: McKinsey's latest research estimates that generative AI could contribute between $2.6 trillion and $4.4 trillion annually across the 63 use cases analyzed.
Consequently, many organizations are including a focus on AI in their strategic plans, viewing it as essential for driving key priorities, strategic decisions, and operational and customer impact.
As AI's potential continues to evolve, how prominently should it feature in an organization's three- to five-year business strategy? What key factors should leaders consider when integrating AI into the organization’s long-term strategic plans?
Understand AI Maturity
Before incorporating AI as a key investment in their strategic plan, leaders must conduct a comprehensive review or audit of their organization's AI maturity.
According to Gallup research, many employees who use AI at least once a year report that their organizations have not provided any training on using AI at work. Only 6% of employees feel very comfortable using AI in their roles, while about one in six (16%) are very or somewhat comfortable. Meanwhile, about a third of employees (32%) say they are very uncomfortable using AI in their roles.
More on including AI in your future strategic plans on Forbes
AI: The Ultimate Healthcare Hire
The healthcare system is at a breaking point, with clinician shortages, aging doctors, and increasing burnout pushing the industry to its limits. Julie Yoo, General Partner at a16z, introduces her 2025 Big Idea: Super Staffing, a transformative approach that leverages AI to augment clinical capacity.
We discuss how AI-powered co-pilots and autonomous agents are reshaping healthcare workflows, reducing administrative burdens, and addressing the growing supply-demand mismatch. With insights into rapid technology adoption and innovative models like asynchronous care, this episode unpacks how AI can unlock the latent potential of the healthcare workforce and improve patient outcomes.
Instagram Teases AI Editing Tools That Will Completely Reimagine Your Videos
The feature uses Meta’s Movie Gen AI model and is set to arrive next year.
Instagram is planning to introduce a generative AI editing feature next year that will allow users to “change nearly any aspect of your videos.” The tech is powered by Meta’s Movie Gen AI model, according to a teaser posted by Instagram head Adam Mosseri, and aims to give creators more tools to transform their content and bring their ideas to life without extensive video editing or manipulation skills.
Mosseri says the feature can make adjustments using a “simple text prompt.” The announcement video includes previews of early research AI models that change Mosseri’s outfit, background environments, and even his overall appearance — in one scene transforming him into a felt puppet. Other changes are more subtle, such as adding new objects to the existing background or a gold chain around Mosseri’s neck without altering the rest of his clothing.
It’s an impressive preview. The inserted backgrounds and clothing don’t distort unnaturally when Mosseri rapidly moves his arms or face, but the snippets we get to see are barely a second long. The early previews of OpenAI’s Sora video model also looked extremely polished, however, and the results we’ve seen since it became available to the public haven’t lived up to those expectations. We won’t know how good Instagram’s AI video tools truly are by comparison until they launch.
More on Instagram’s Movie Gen AI editing tools on The Verge
Hands On: Veo 2 | AI Video Generation
In this video, I dive into Google’s newly released Veo 2 AI video generation model, just one week after the arrival of another major player in the AI video scene.
How does Veo 2 measure up, and is it really the new king of AI video?
I put it to the test with a series of prompts—from photorealistic island crash landings to eerie ’80s horror vibes, gritty sci-fi settings, and beyond. I’ll share insights into the UI, show off early outputs, and offer tips on prompting for better results.
As this is an early-access look, the model still has quirks and limitations, but the leaps in video realism, character physics, and scene composition are undeniably impressive.
That’s all for today, but new advancements, investments, and partnerships are happening as you read this. AI is moving fast, so subscribe today to stay informed.