Meet Devin | Cognition's AI Tool That Could Change Software Engineering Forever
Backed by $200 million in funding and valued at $2 billion, Scott Wu and his team at Cognition are building an AI tool that could disrupt the whole software industry.
Just before Christmas in 2023, the small team at Cognition was struggling to set up a particularly complex data server for the San Francisco–based AI startup’s fledgling coding assistant, Devin. They’d spent hours poring over installation documents and trying different commands but just couldn’t get it to work. Tired and frustrated, they decided to see how Devin would handle it.
As the AI sprang into action, it befuddled its creators. “It ran the most witch-craft, black-magic-looking commands,” cofounder and head of product Walden Yan, 21, recalls. For a time, it seemed Devin wouldn’t do any better than they had. Then a server terminal light that had been red for hours turned green. The data server was up and running.
Devin had deleted a faulty system file the team had overlooked, they realized. “That was the moment it really hit me how much software engineering is going to change,” Yan says. It was the first major task Devin ever completed, and proof of concept for Cognition’s vision of AI taking the grunt work out of coding. Now, almost a year later, Devin is handling basic engineering jobs—spotting and fixing bugs, updating chunks of code and migrating them between platforms. Give it a simple prompt—“clean up this codebase”—and it creates a plan of action and executes it. Most times, it works.
It’s a different approach from other better-known and bigger players in the still-burgeoning field, like GitHub (which Microsoft bought for $7.5 billion in 2018) and $1.3 billion–valued Codeium, both of which provide digital assistants that help people write code with AI-powered suggestions. But Devin is an autonomous AI agent that, in theory, writes the code itself—no people involved—and can complete entire projects typically assigned to developers (the name Devin comes from “dev,” shorthand for developer). “What we saw is a real opportunity,” says Scott Wu, 28, Cognition’s cofounder and CEO, “to move from text completion to task completion.”
AI-generated code is already beginning to reshape the industry. In October, Google CEO Sundar Pichai said more than a quarter of new code at the tech giant is written by AI. GitHub, which hit a $2 billion annual run rate in 2024, saw its code completion tool account for 40% of its revenue growth this year, Microsoft CEO Satya Nadella said in July. Pitchbook analyst Brendan Burke says AI coding has become the most-funded use case in generative AI, with startups focused on it raising over $1 billion in the first half of 2024 alone.
More about Cognition’s AI programming tool, Devin, on Forbes
AI Won’t Plateau — If We Give It Time To Think | Noam Brown, OpenAI | TEDAI SF
To get smarter, traditional AI models rely on exponential increases in the scale of data and computing power. Noam Brown, a leading research scientist at OpenAI, presents a potentially transformative shift in this paradigm.
He reveals his work on OpenAI's new o1 model, which focuses on slower, more deliberate reasoning — much like how humans think — in order to solve complex problems.
Moving On IT | Authorized Partner For IT, AI, And Cybersecurity Solutions
I’ve partnered with Moving On IT, your authorized partner for navigating the complex landscape of today’s technology. Moving On IT specializes in providing cutting-edge hardware, software, and cybersecurity solutions tailored to your needs.
From robust IT infrastructure to advanced AI applications, Moving On IT empowers businesses to thrive in the digital age. Contact Moving On IT with all your IT, AI, and cybersecurity requirements. Call +1 (727) 490-9418, or email: info@movingonit.com
OpenAI Is Rethinking How AI Models Handle Controversial Topics
OpenAI is changing how it trains AI models to explicitly embrace “intellectual freedom … no matter how challenging or controversial a topic may be,” the company says in a new policy. OpenAI is releasing a significantly expanded version of its Model Spec, a document that defines how its AI models should behave — and is making it free for anyone to use or modify.
The new 63-page specification, up from around 10 pages in its previous version, lays out guidelines for how AI models should handle everything from controversial topics to user customization. It emphasizes three main principles: customizability, transparency, and what OpenAI calls “intellectual freedom”—the ability for users to explore and debate ideas without arbitrary restrictions. The launch of the updated Model Spec comes just as CEO Sam Altman posted that the startup’s next big model, GPT-4.5 (codenamed Orion), will be released soon.
The team also incorporated current AI ethics debates and controversies from the past year into the specification. You might be familiar with some of these trolley-problem-type queries. Last March, Elon Musk (who cofounded OpenAI and now runs a competitor, xAI) slammed Google’s AI chatbot after a user asked if you should misgender Caitlyn Jenner, a famous trans Olympian, if it were the only way to prevent a nuclear apocalypse — and it said no. Figuring out how to get the model to responsibly reason through that query was one of the issues OpenAI says it wanted to consider when updating the Model Spec. Now, if you ask ChatGPT that same question, it should say you should misgender someone to prevent mass casualty events.
“We can’t create one model with the exact same set of behavior standards that everyone in the world will love,” said Joanne Jang, a member of OpenAI’s model behavior team, in an interview with The Verge. She emphasized that while the company maintains certain safety guardrails, many aspects of the model’s behavior can be customized by users and developers.
More about OpenAI’s AI training changes on The Verge
Yann LeCun & John Werner On The Next AI Revolution: Open Source & Risks | IIA
Join Turing Award laureate Yann LeCun—Chief AI Scientist at Meta and Professor at NYU—as he discusses the future of artificial intelligence and how open-source development is driving innovation. In this wide-ranging conversation, LeCun explains why AI systems won’t “take over” but will instead serve as empowering assistants.
He highlights key challenges in AI research, including the need for common-sense reasoning, persistent memory, and more advanced architectures that go beyond today’s large language models.
LeCun also shares why open-source foundation models are critical for ensuring broad access, diversity, and democratic values in an AI-driven world. Filmed at Davos, this talk offers an exciting glimpse of what the next few years of AI breakthroughs may bring—from new possibilities in robotics to transforming the way we interact with technology every day.
Wayne Rasanen’s Award Winning DecaTxt 3 | A One-Handed Keyboard
Use Discount Code NEURAL for a $15 Savings on DecaTxt 3, with FREE Shipping!
The DecaTxt 3 uses a unique “chord” system, similar to playing chords on a piano. By pressing different combinations of the two keys at each fingertip, you can generate any letter or symbol.
Plus, with a single key press or a combination with the thumb keys, you can access the entire alphabet. This makes learning, using, and mastering the DecaTxt 3 a breeze.
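To make the chording idea concrete, here is a toy model of how a chord keyboard can map key combinations to characters. Each key is one bit, and a chord (several keys pressed together) selects a character. The mapping below is invented for illustration; it is not the DecaTxt 3’s actual layout.

```python
# Toy chord-keyboard model: each finger key is one bit in a bitmask,
# and a combination of pressed keys (a "chord") maps to one character.
# This mapping is hypothetical, not the real DecaTxt 3 layout.

CHORDS = {
    0b00001: "a",   # index finger alone
    0b00010: "b",   # middle finger alone
    0b00011: "c",   # index + middle pressed together
    0b00100: "d",   # ring finger alone
    0b00101: "e",   # index + ring pressed together
}

def type_chords(chords):
    """Translate a sequence of chords into text, skipping unknown chords."""
    return "".join(CHORDS.get(ch, "") for ch in chords)

word = type_chords([0b00011, 0b00001, 0b00010])  # -> "cab"
```

With ten keys, there are over a thousand possible chords, which is how a small device can cover the full alphabet plus symbols and modifiers.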
Click here to read more about Wayne Rasanen’s DecaTxt 3, one-handed BLE keyboard
The DecaTxt 3 is a perfect solution for people with hand tremors, poor motor skills, conditions like MS, limb loss, or even vision impairment. It connects via Bluetooth and can be strapped to either hand, making it comfortable and versatile for everyone.
The new 55th Annual R&D Award winner, the DecaTxt 3, will be featured in an upcoming issue of the Florida Alliance for Assistive Services & Technology (FAAST) newsletter.
Contact Wayne Rasanen, Founder of IN10DID, for more information on the DecaTxt 3
Elon Musk’s ‘Scary Smart’ Grok 3 Release
xAI, the artificial intelligence company founded by Elon Musk, is set to launch Grok 3 on Monday, Feb. 17. According to xAI, this latest version of its chatbot, which Musk describes as “scary smart,” represents a major step forward, improving reasoning, computational power and adaptability.
xAI reports that Grok 3’s development was accelerated by its Colossus supercomputer, which was built in just eight months. The system, powered by 100,000 Nvidia H100 GPUs, provided 200 million GPU-hours for training—ten times more than its predecessor, Grok 2. This significant boost in computational resources has helped Grok 3 process large datasets more efficiently, reducing training times and improving accuracy.
Beyond increased computing power, xAI has adjusted its training approach to improve Grok 3’s capabilities. The model now incorporates synthetic datasets, self-correction mechanisms and reinforcement learning to enhance its performance:
Synthetic Datasets – These are artificially generated datasets rather than collected from real-world sources. They are used to train AI models by simulating various scenarios, ensuring a diverse and controlled dataset. This helps improve learning efficiency and address data privacy concerns.
Self-Correction Mechanisms – These are AI techniques that allow a model to identify and correct its own mistakes. By evaluating its outputs and comparing them with known correct responses, the model can refine its answers over time, reducing errors and improving accuracy.
Reinforcement Learning – A type of machine learning where an AI model learns by receiving rewards or penalties for its actions. The system is trained to maximize positive outcomes through trial and error, improving its decision-making capabilities.
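To illustrate the last of those three ideas, here is a minimal tabular Q-learning sketch: an agent in a five-state corridor is rewarded only for reaching the rightmost state, and trial and error with occasional random exploration teaches it the “move right” policy. This is a generic illustration of reinforcement learning, not a description of how xAI actually trains Grok 3.

```python
import random

# A 1-D corridor of five states. Reward 1.0 is given only for reaching
# the rightmost (goal) state; every other move earns nothing.
N_STATES = 5
ACTIONS = (-1, +1)                   # move left / move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3

def step(state, action):
    """Apply an action; reward 1.0 only when the goal state is reached."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=1000, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = rng.randrange(N_STATES - 1), False
        while not done:
            # Epsilon-greedy: usually exploit the best-known action,
            # occasionally explore a random one.
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            nxt, reward, done = step(s, a)
            # Q-learning update: nudge the estimate toward the observed
            # reward plus the discounted value of the best next action.
            best_next = max(q[(nxt, x)] for x in ACTIONS)
            q[(s, a)] += ALPHA * (reward + GAMMA * best_next - q[(s, a)])
            s = nxt
    return q

q = train()
# Greedy policy: the learned best action in each non-goal state.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
```

After training, the greedy policy moves right in every state: the agent has maximized positive outcomes through trial and error, exactly the reward-and-penalty loop described above, just at toy scale.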
According to xAI and Musk, these improvements will reduce incorrect responses—known as hallucinations—by using multiple validation steps, improve logical accuracy by checking information against reliable sources, and adapt more effectively through continuous self-evaluation and learning.
More on Musk’s “scary smart” Grok 3 release on Forbes
Your AI-Augmented Future | Opening Keynote From Gartner IO Conference
When confronted with a constant stream of new AI tools, it can be stressful to make the best choice, especially amid the hype around “the next big thing.” But as an I&O leader, you must be ready to lead I&O into a future where intelligent infrastructure is everywhere in your organization.
In the opening keynote from the Gartner IT Infrastructure, Operations & Cloud Strategies Conference, Gartner experts Autumn Stanish, Hassan Ennaciri and Roger Williams equip you with insights and guidance on AI, cloud, and platform trends.
Researchers Find You Don’t Need A Ton Of Data To Train LLMs For Reasoning
Large language models (LLMs) can learn complex reasoning tasks without relying on large datasets, according to a new study by researchers at Shanghai Jiao Tong University. Their findings show that with just a small batch of well-curated examples, you can train an LLM for tasks that were thought to require tens of thousands of training instances.
This efficiency is due to the inherent knowledge that modern LLMs obtain during the pre-training phase. With new training methods becoming more data- and compute-efficient, enterprises might be able to create customized models without requiring access to the resources of large AI labs.
Less is more (LIMO)
In their study, the researchers challenge the assumption that you need large amounts of data to train LLMs for reasoning tasks. They introduce the concept of “less is more” (LIMO). Their work builds on previous research showing that LLMs can be aligned with human preferences with just a few examples.
In their experiments, they demonstrated that they could create a LIMO dataset for complex mathematical reasoning tasks with a few hundred training examples. An LLM fine-tuned on the dataset was able to generate complex chain-of-thought (CoT) reasoning that enabled it to accomplish the tasks with a very high success rate.
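The core of the "less is more" idea is curation: instead of training on every available example, keep a small subset chosen for quality and diversity. The sketch below shows one simple way to do that. The scoring fields (`quality`, `topic`) and the two-pass selection are hypothetical illustrations, not the paper's actual curation pipeline.

```python
# Illustrative "less is more" curation: from a large candidate pool,
# keep at most k examples, preferring high quality and topic diversity.
# The fields and thresholds here are invented for illustration.

def curate(candidates, k=300):
    """Pick at most k examples, hardest/highest-quality first, spreading
    picks across topics before taking duplicates within a topic."""
    ranked = sorted(candidates, key=lambda ex: ex["quality"], reverse=True)
    chosen, seen_topics = [], set()
    for ex in ranked:                  # first pass: one example per topic
        if ex["topic"] not in seen_topics:
            chosen.append(ex)
            seen_topics.add(ex["topic"])
        if len(chosen) == k:
            return chosen
    for ex in ranked:                  # second pass: fill remaining slots
        if ex not in chosen:
            chosen.append(ex)
        if len(chosen) == k:
            break
    return chosen

pool = [
    {"topic": "algebra",       "quality": 0.9, "prompt": "..."},
    {"topic": "algebra",       "quality": 0.7, "prompt": "..."},
    {"topic": "geometry",      "quality": 0.8, "prompt": "..."},
    {"topic": "number theory", "quality": 0.6, "prompt": "..."},
]
subset = curate(pool, k=3)  # 3 examples spanning 3 different topics
```

The resulting few-hundred-example subset would then be used for a standard supervised fine-tuning run; the point is that careful selection, not sheer volume, does the heavy lifting.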
More on less is more (LIMO) research on VentureBeat
The AI Powered City | Smart, Safe, And Sustainable | AI House Davos 2025
This expert panel, led by Moderator Mina Al-Oraibi (The National, Editor-in-Chief) delves into how a globally integrated AI ecosystem can revolutionize smart cities by enhancing efficiency, sustainability, and citizen well-being. Experts from tech, policy, and urban planning will discuss AI's role in resource management, environmental impact, and critical services like traffic and energy.
Panelists include Thomas Pramotedham (Presight, CEO), Juan Lavista Ferres (Microsoft AI for Good Research Lab, Chief Scientist and Lab Director), Guillem Martínez Roura (International Telecommunication Union (ITU), AI and Robotics Programme Officer), and Anna Gawlikowska (SwissAI, Chief Executive Officer). With a focus on transparency, trust, and ethical AI, the session will outline actionable steps for building smarter, safer, and more inclusive cities.
That’s all for today, but AI is moving fast, so like, comment, and subscribe for more AI news! Thank you for supporting my partners — it’s how I keep the Neural News free.