OpenAI's o3-mini Now Lets You See The AI's Thought Process
This OpenAI update is available to free and paid users and could make getting the results you want easier.
OpenAI released its o3-mini model exactly one week ago, offering both free and paid users a faster, cheaper, and more accurate alternative to o1-mini. Now, OpenAI has updated o3-mini to show its chain of thought.
OpenAI announced via an X post that free and paid users can now view the reasoning process o3-mini goes through before arriving at a conclusion. For example, in the post, a user asked, "How is today not a Friday?" and, under the dropdown showing how long the model took to respond, it delineated every step in the chain of thought that led to its answer.
Seeing how the model arrived at its conclusion is useful because it not only lets users verify the accuracy of the answer, but also teaches them how they could have arrived at it themselves. This is particularly useful for math or coding prompts, where seeing the steps could allow you to recreate them the next time you encounter a similar problem.
Paid ChatGPT subscribers will also be able to see the updated chain of thought when running o3-mini at high reasoning effort. As the name implies, high reasoning effort lets the model apply more compute to advanced questions that demand deeper reasoning. In the X post announcing the feature, OpenAI uses the term chain of thought (CoT), but what does it actually mean?
In the same way you would ask a person to explain their reasoning step by step, CoT prompting encourages an LLM to break down a complex problem into logical, smaller, and solvable steps. By sharing these reasoning steps with users, the model becomes more interpretable, allowing users to better steer its responses and identify errors in reasoning.
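To make the idea concrete, here is a minimal sketch of a chain-of-thought style request using the OpenAI Python SDK. It assumes the openai package is installed, that your account can call the o3-mini model, and that the reasoning effort setting mentioned above is exposed as the reasoning_effort parameter; the prompt simply asks the model to show its steps before answering.

```python
# Minimal sketch: asking o3-mini for step-by-step reasoning via the OpenAI
# Python SDK. Assumes the `openai` package is installed, OPENAI_API_KEY is
# set, and that your account has access to the o3-mini model and the
# reasoning_effort setting referenced above.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",  # "low" / "medium" / "high"
    messages=[
        {
            "role": "user",
            # A chain-of-thought style request: ask for the steps, not just the answer.
            "content": (
                "How is today not a Friday? "
                "Walk through your reasoning step by step, then give a one-line answer."
            ),
        }
    ],
)

print(response.choices[0].message.content)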
More on OpenAI’s Chain of Thought reasoning on ZDNET
AI: The Biggest Tech Infrastructure Buildout In History | AI House Davos
It is no secret that AI is the defining trend of our generation, but what does that actually mean? Very soon, AI will be deeply pervasive in our lives, and the backbone of the industry is being built today.
Moderator Shirin Ghaffary (Reporter, Bloomberg News) leads an expert panel including Chase Lochmiller (CEO, Crusoe), Costi Perricos (Global GenAI Business Leader, Deloitte), and Varun Mohan (Co-Founder and CEO, Codeium), asking: how are we building the infrastructure to support this massive global technological revolution?
What do some of the global trends look like in terms of scaling both infrastructure and adoption? Hear from experts across the value chain, from data center builders to cloud platforms to AI products, to understand how this massive scale-up in infrastructure and offerings is furthering innovation and business while also mitigating the societal, economic, and environmental risks of scaling AI.
Bruce Burke Participating In Forbes Entrepreneur Of Impact Competition
VOTING IS NOW OPEN - I HAVE TO ASK - PLEASE VOTE FOR BRUCE BURKE
Exciting News! Neural News Network editor Bruce Burke has been selected to participate in the Entrepreneur of Impact competition. One visionary winner will be featured in Forbes, receive $25,000, and have a one-on-one mentoring session with the Shark Tank's own Daymond John. Please vote for your Neural News Network editor!
CLICK HERE TO VOTE FOR BRUCE BURKE IN ENTREPRENEUR OF IMPACT
I'm proposing building an AI-powered, fully automated news and information organization that creates news articles, videos, podcasts, deep dives, special reports, white papers, and more — focused on the ever-expanding world of AI.
I have advanced to the top 15 and am currently in 12th place, with only four days to go!
I would appreciate your vote and will be posting updates as voting continues. I have set up a profile outlining my proposal, linked below. PLEASE VOTE TODAY!
CLICK HERE TO VOTE FOR BRUCE BURKE IN ENTREPRENEUR OF IMPACT
Hugging Face Brings ‘Pi-Zero’ To LeRobot, Making AI-Powered Robots Easier To Build And Deploy
Hugging Face and Physical Intelligence have quietly launched Pi0 (Pi-Zero) this week, the first foundational model for robots that translates natural language commands directly into physical actions.
“Pi0 is the most advanced vision language action model,” Remi Cadene, a principal research scientist at Hugging Face, announced in an X post that quickly gained attention across the AI community. “It takes natural language commands as input and directly outputs autonomous behavior.”
This release marks a pivotal moment in robotics: The first time a foundation model for robots has been made widely available through an open-source platform. Much like ChatGPT revolutionized text generation, Pi0 aims to transform how robots learn and execute tasks.
Pi0 brings ChatGPT-style learning to robotics, unlocking complex tasks
The model, originally developed by Physical Intelligence and now ported to Hugging Face’s LeRobot platform, can perform complex tasks like folding laundry, bussing tables and packing groceries — activities that have traditionally been extremely challenging for robots to master.
“Today’s robots are narrow specialists, programmed for repetitive motions in choreographed settings,” the Physical Intelligence research team wrote in their announcement post. “Pi0 changes that, allowing robots to learn and follow user instructions, making programming as simple as telling the robot what you want done.”
The technology behind Pi0 represents a significant technical achievement. The model was trained on data from seven different robotic platforms and 68 unique tasks, enabling it to handle everything from delicate manipulation tasks to complex multi-step procedures. It employs a novel technique called flow matching to produce smooth, real-time action trajectories at 50Hz, making it highly precise and adaptable for real-world deployment.
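As a rough illustration of that last idea (and not the actual Pi0 or LeRobot code), the sketch below shows how flow-matching inference can turn random noise into a smooth chunk of actions by integrating a learned velocity field. The "network" here is a stand-in function, and the horizon, action dimension, and conditioning are invented, so the example runs on its own.

```python
# Illustrative sketch (not the Pi0/LeRobot API): flow-matching inference turns
# noise into a smooth action chunk by integrating a learned velocity field.
# A trained network would predict v(actions, t, observation); here a toy
# stand-in flows straight toward a dummy target so the script is runnable.
import numpy as np

HORIZON = 50      # e.g. one second of actions at 50 Hz
ACTION_DIM = 7    # e.g. a 7-DoF arm
NUM_STEPS = 10    # Euler integration steps from t=0 to t=1

def predict_velocity(actions, t, observation):
    # Placeholder for a learned model conditioned on images and language.
    # Real models regress the velocity that transports noise toward data.
    target = observation["target_actions"]  # hypothetical conditioning signal
    return target - actions                  # toy straight-line flow toward the target

def sample_action_chunk(observation, rng):
    actions = rng.standard_normal((HORIZON, ACTION_DIM))  # start from noise
    dt = 1.0 / NUM_STEPS
    for step in range(NUM_STEPS):
        t = step * dt
        actions = actions + dt * predict_velocity(actions, t, observation)
    return actions                                         # smooth trajectory

rng = np.random.default_rng(0)
obs = {"target_actions": np.zeros((HORIZON, ACTION_DIM))}  # dummy conditioning
chunk = sample_action_chunk(obs, rng)
print(chunk.shape)  # (50, 7): one action per control tick
```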
More on Hugging Face and Physical Intelligence’s Pi-Zero on VentureBeat
What If AI Could Spot Your Lies? | TED
Humans are terrible at detecting lies, says psychologist Riccardo Loconte ... but what if we had an AI-powered tool to help? He introduces his team’s work successfully training an AI to recognize falsehoods in certain contexts, laying the groundwork for a world where everything from national security to social media is a little bit safer — and a bit more ethically complicated. Recorded at TEDAI Vienna on October 19, 2024.
Moving On IT | Authorized Partner For IT, AI, And Cybersecurity Solutions
I’ve partnered with Moving On IT, your authorized partner for navigating the complex landscape of today’s technology. Moving On IT specializes in providing cutting-edge hardware, software, and cybersecurity solutions tailored to your needs.
From robust IT infrastructure to advanced AI applications, Moving On IT empowers businesses to thrive in the digital age. Contact Moving On IT with all your IT, AI, and cybersecurity requirements. Call +1 (727) 490-9418, or email: info@movingonit.com
Check out Moving On IT’s new press release on Cybersecurity Dive | CLICK HERE
This Pixar-Inspired Robot Lamp Is The First Apple Intelligence Smart Device
For those who love Pixar's Luxo Jr., Apple's engineers developed a lamp that swivels with excitement as it obeys your every whim.
Thirty-nine years ago, the CG wizards at Pixar made us all believe a faceless desk lamp could be enormously expressive and incredibly cute. Apple, with its mind set on home robotics, shows us what such an adorable lamp would look like in real life.
The tech giant has been working on a lamp that’s a little goofy while it tries to respond to your requests, and it may be the one Apple Intelligence-enabled device I want in my life—more than any AI assistant on my iPhone.
Apple’s Machine Learning Research division posted a relatively short research paper to the arXiv preprint repository last month detailing its “expressive and functional movement design for non-anthropomorphic robot.” MacRumors spotted the article and uploaded a YouTube video of the expressive lamp in action.
It’s a device that’s immediately reminiscent of Pixar’s mascot Luxo Jr., and it’s somehow just as cute. Engineers gestured to get the lamp to move forward or look in a particular direction. Rather than simply moving linearly, the lamp acted equal parts confused and curious, with various states of “attention,” “attitude,” and “expression,” according to the paper. Apple calls this framework ELEGNT, a clumsy acronym for “expressive and functional movement design for non-anthropomorphic robot.”
You know what, Apple may be on the money here. The expressive robot is far more entertaining than one that merely does what you tell it to. In one highlight, the lamp tried to extend toward a note its arm couldn’t reach, before shaking its head in dejection and apologizing with an AI-generated voice.
More on Apple’s ELEGNT framework on Gizmodo
Mo Gawdat | The Future Of AI And How It Will Shape Our World | Scott Galloway
Mo Gawdat, the former Chief Business Officer of Google X, bestselling author, founder of the ‘One Billion Happy’ foundation, and co-founder of ‘Unstressable,’ joins Scott to discuss the state of AI — where it stands today, how it’s evolving, and what that means for our future. They also get into Mo’s latest book, Unstressable: A Practical Guide to Stress-Free Living, on the Prof G podcast.
Wayne Rasanen’s Award Winning DecaTxt 3 | A One-Handed Keyboard
Use Discount Code NEURAL For A $15 Savings on DecaTxt 3, with FREE Shipping!
The DecaTxt 3 uses a unique "chord" system, similar to a piano. By pressing different combinations of the two keys at each fingertip, you can generate any letter or symbol.
Plus, with a single key press or a combination with the thumb keys, you can access the entire alphabet. This makes learning, using, and mastering the DecaTxt 3 a breeze.
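For a sense of how chording works in general, here is a toy sketch that decodes key combinations into characters. The chord map is invented for illustration and does not reflect the actual DecaTxt 3 layout.

```python
# Toy sketch of how a chording keyboard decodes key combinations into
# characters. The chord map below is made up for illustration; the real
# DecaTxt 3 layout is different.

# Hypothetical chord map: a frozenset of pressed keys -> output character.
CHORD_MAP = {
    frozenset({"index"}): "e",
    frozenset({"middle"}): "t",
    frozenset({"index", "middle"}): "a",
    frozenset({"thumb", "index"}): "s",
    frozenset({"thumb", "index", "middle"}): " ",
}

def decode_chord(pressed_keys):
    """Return the character for a chord, or None if the chord is unmapped."""
    return CHORD_MAP.get(frozenset(pressed_keys))

print(decode_chord(["index", "middle"]))  # -> "a"
print(decode_chord(["ring", "pinky"]))    # -> None (unmapped in this toy map)
```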
Click here to read more about Wayne Rasanen’s DecaTxt 3, BT one-handed keyboard.
The DecaTxt 3 is a perfect solution for people with hand tremors, poor motor skills, conditions like MS, limb loss, or even vision impairment. It connects via Bluetooth and can be strapped to either hand, making it comfortable and versatile for everyone.
The new 55th Annual R&D Award winner, the DecaTxt 3, will be featured in an upcoming issue of the Florida Alliance for Assistive Services & Technology (FAAST) newsletter.
Contact Wayne Rasanen, Founder of IN10DID, for more information on DecaTxt 3
One Year Later, The Rabbit R1 Is Actually Good Now — Here's Why
Wait! The R1 is kind of great now!?
If you’re reading this, chances are you already know the journey the Rabbit R1 has been on — hopping high with expectations built at CES, and then tumbling down the rabbit hole after launch.
“Avoid this AI gadget,” Editor-in-Chief Mark Spoonauer wrote in his Rabbit R1 review, and in those early days, it was hard to disagree. This was a barely half-finished box that was slow, unreliable, and inaccurate at what it was supposed to do.
But 12 months have passed. Where is the Rabbit R1 now? Well, with a relentless pipeline of updates and novel AI ideas… it’s actually pretty good now!?
It’s not the breakthrough device that CEO and Founder Jesse Lyu promised on-stage all those months ago. But with the Large Action Model (LAM) in full swing, Generative UI, Magic Voice, r-cade customizability and everything in-between, this is now one of the more fun ways to interact with AI that I’ve used. So before we go any further, let’s jump back to Spoonauer’s review and go through the checklist of cons that warranted that 1.5-star review, and see whether they’ve been fixed.
More on the Rabbit’s updated Large Action Model on Tom’s Guide
Dialogue At UTokyo GlobE | CEO Sam Altman And CPO Kevin Weil, OpenAI
On Monday, February 3, 2025, Dialogue at UTokyo GlobE #14 held an event with Mr. Sam Altman (CEO of OpenAI) and Mr. Kevin Weil (Chief Product Officer, OpenAI) as part of its “Dialogue” series. President Teruo Fujii and Executive Vice President Kaori Hayashi welcomed the two guests, along with 36 students whose majors ranged from engineering to medicine to philosophy.
Professor Yujin Yaguchi, Director of the Center for Global Education (GlobE), served as the moderator. At the beginning of the event, the Inami/Monnai Laboratory of the University of Tokyo's Research Center for Advanced Science and Technology showed and explained the "JIZAI ARMS" as an example of many exciting research activities taking place at the university.
Afterwards, President Fujii and EVP Hayashi had a brief discussion on the use and future of AI with Mr. Altman and Mr. Weil, followed by a question-and-answer session between the speakers and participants.
That’s all for today, but AI is moving fast - like, comment, and subscribe for more AI news! Please vote for me in the Entrepreneur of Impact competition today! Thank you for supporting my partners and me — it’s how I keep Neural News Network free.