OpenAI’s o1 Model Leaked on Friday and It Is Wild — Here’s What Happened
OpenAI is set to release the full o1 reasoning model sometime this year, but an unexpected leak last week means we may have already seen it in action — and it is even better than we expected.
In September, OpenAI unveiled a new type of AI model that takes time to reason through a problem before responding. This capability was added to ChatGPT in the form of o1-preview and o1-mini; neither demonstrated the full capabilities of the final o1 model, but both showed a major improvement in accuracy over GPT-4.
CEO Sam Altman says o1 is a divergence from the GPT-style models OpenAI normally releases, including GPT-4o, which powers Advanced Voice. During a briefing with OpenAI, I was told the full o1 is a significant improvement over the preview, and the leak seems to confirm that is the case.
For about two hours on Friday, users could access what is thought to be the full version of o1 (OpenAI has not confirmed this) by changing a parameter in the URL. The full model will also be able to analyze images and access tools like web search and data analysis.
More about what was revealed in the OpenAI o1 leak on Tom’s Guide
A First Look at the Advanced Camera Controls in Runway's Gen-3 Alpha Turbo
Advanced Camera Control is now available for Runway's Gen-3 Alpha Turbo. The new feature lets you control both the direction and intensity of camera movements in AI-generated videos, giving you more precise control over how scenes unfold.
AI That Can Invent AI Is Coming. Buckle Up.
Leopold Aschenbrenner’s “Situational Awareness” manifesto made waves when it was published this summer. In this provocative essay, Aschenbrenner—a 22-year-old wunderkind and former OpenAI researcher—argues that artificial general intelligence (AGI) will be here by 2027, that artificial intelligence will consume 20% of all U.S. electricity by 2029, and that AI will unleash untold powers of destruction that within years will reshape the world geopolitical order.
Aschenbrenner’s startling thesis about exponentially accelerating AI progress rests on one core premise: that AI will soon become powerful enough to carry out AI research itself, leading to recursive self-improvement and runaway superintelligence.
The idea of an “intelligence explosion” fueled by self-improving AI is not new. From Nick Bostrom’s seminal 2014 book Superintelligence to the popular film Her, this concept has long figured prominently in discourse about the long-term future of AI.
But—though few people have yet noticed—this concept is in fact starting to get more real. At the frontiers of AI science, researchers have begun making tangible progress toward building AI systems that can themselves build better AI systems.
More about AI systems that can build better AI systems on Forbes
Touch Perception at Meta FAIR
Meta is bringing the sense of touch to AI through a series of breakthrough developments that could transform everything from online shopping to prosthetic limbs. The company announced today it's partnering with GelSight and Wonik Robotics to commercialize advanced tactile sensing technology that processes touch information 30 times faster than humans.
Apple Intelligence Will Help AI Become As Commonplace As Word Processing
When Apple’s version of AI, branded as Apple Intelligence, rolls out in October to folks with the company’s latest hardware, the response is likely to be a mix of delight and disappointment.
The AI capabilities on their way to Apple’s walled garden will bring helpful new features, such as textual summaries in Mail, Messages, and Safari; image creation; and a more context-aware version of Siri.
But as Apple Intelligence’s beta testing has already made clear, the power of these features falls well below what is on offer from major players like OpenAI, Google, and Meta. Apple AI won’t come close to the quality of document summarization, image generation, or audio generation easily accessed from any of the frontier models.
But Apple Intelligence will do something none of the flagship offerings can do: change perceptions of AI and its role in ordinary life for a large portion of users around the world.
The real impact of Apple AI won’t be practical but moral. It will normalize AI and make it seem less foreign and complex. It will dissociate AI from the idea of cheating or cutting corners. It will help a critical mass of users cross a threshold of doubt or mystification about AI to forge a level of comfort with and acceptance of it, even a degree of reliance.
Read more about Apple normalizing AI on TNW
Inside the xAI Supercluster Colossus
We finally get to show the largest AI supercomputer in the world, xAI Colossus. This is the 100,000-GPU cluster (at the time we filmed) in Memphis, Tennessee, that has been in the news a lot.
This video has been five months in the making, and finally Elon Musk gave us the green light to not just film, but also show everyone the Supermicro side of the cluster.
Generative AI Can Reproduce An Image When Trained On As Few As 200 Copies
New research highlights just how eerily well artificial intelligence can re-create images from its training data. AI trained on as few as 200 images can produce passable imitations of popular artworks, according to a new study posted on arXiv, Cornell University’s preprint server—highlighting just how easy it can be for AI systems to mimic copyrighted work.
“Some people are surprised that it’s such a low number, and some people are also surprised that it’s a high number,” says Sahil Verma, lead author of the study and a computer science PhD student at the University of Washington. Verma and his colleagues analyzed three versions of the Stable Diffusion model and the extent to which each could produce images that would be considered imitations of originals. The so-called imitation threshold was calculated algorithmically, based on whether a computer system recognized an image as imitative. The computerized results were also cross-checked against human judgments, which showed a strong correlation.
The actual number of images an AI model needs in its training data varies depending on the system, but falls between 200 and 600. It also depends on what the AI is trying to depict: those looking to mimic the brushstrokes of Vincent van Gogh might need as few as 112 images, while human faces can be replicated using as few as 234 images.
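The search the study describes can be sketched as a simple procedure: generate outputs from models whose training data contains increasing numbers of copies of a target image, score each output for similarity to the original, and report the smallest count at which the detector flags an imitation. The sketch below is a minimal illustration of that idea, not the paper's method; `toy_similarity` and the 0.75 cutoff are hypothetical stand-ins for the study's actual automated detector and threshold.

```python
# Minimal sketch of finding an "imitation threshold": the smallest number of
# training copies at which a generated output is judged an imitation.
# The similarity function and cutoff are hypothetical stand-ins.

def imitation_threshold(counts, similarity_fn, cutoff=0.75):
    """Return the smallest training-image count whose generated output
    scores at or above `cutoff`, or None if none does."""
    for n in sorted(counts):
        if similarity_fn(n) >= cutoff:
            return n
    return None

def toy_similarity(n):
    # Toy curve: similarity to the original rises with the number of
    # training copies and saturates at 1.0. Purely illustrative.
    return min(1.0, n / 400)

print(imitation_threshold([100, 200, 300, 600], toy_similarity))  # prints 300
```

In the real study, the similarity judgment comes from an automated detector cross-checked against humans; the sketch only shows why the answer comes out as a single per-concept count rather than a universal constant.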
Read more about AI image reproduction on Fast Company
Why AI Is NOT The Manhattan Project
The future of AI is being written NOW. Will we repeat the mistakes of the past or create something better? In this episode, Verity Harding (Google DeepMind, Cambridge's AI & Geopolitics Project) argues that we all need to be involved in shaping the future of AI and uses the history of other technological advances to make her case. Watch to find out what's at stake and how you can make a difference!
That’s all for today, but new advancements, investments, and partnerships are happening as you read this. Subscribe today so you don’t miss any AI-related news.