Eric Schmidt's SandboxAQ Aims For $5B Valuation For Its AI/Quantum Moonshot
SandboxAQ began as Alphabet’s moonshot AI and quantum computing unit, and now has an impressive roster of projects.
VCs are spending gobs of money on AI startups — especially those run by big names in tech — so SandboxAQ is putting its hand out again, even though it raised a whopping $500 million in early 2023. The spinout from Google parent company Alphabet is reportedly seeking to raise another round that would value it at $5 billion, sources tell Bloomberg. Its last $500 million round, completed in February 2023, had backers like Breyer Capital, T. Rowe Price funds, and Marc Benioff, Reuters reported at the time. PitchBook estimated its valuation after that round to be $4 billion.
SandboxAQ began as Alphabet’s moonshot AI and quantum computing unit, led by Jack Hidary, a longtime X Prize board member. It was spun out of Alphabet as an independent startup in March 2022, with Hidary as CEO. Billionaire and former Google CEO Eric Schmidt became the startup’s chairman.
Its mission is a veritable alphabet soup of buzzwords: to work at the intersection of quantum computing and AI. But it is not building a quantum computer, although its software products should one day work with them, Hidary said on a recent episode of the Peter H. Diamandis podcast. Instead, it’s building software based on quantum physics that can model molecules and predict their behavior. Google is still working on the quantum computer part, but Hidary says SandboxAQ already has a number of quantum computing partnerships.
Read more about SandboxAQ aiming for a $5 billion valuation on TechCrunch
Shelton: Building Enterprise AI and Why Companies Should Own their Models
Inflection AI now provides custom AI solutions for large enterprise customers. COO Ted Shelton sits down with Chris McKay to talk about Inflection’s relaunch, why enterprise companies should own their AI, partnering with Intel and Dell, and the challenges large enterprises are facing with AI adoption.
OpenAI Chatbot Passes Bias Tests, But End Users Should Still Be Watchful
The complex math-based systems that buzz away at the heart of modern AI systems require huge amounts of data to train them to be useful when users ask for help. But that input data, which is obtained from many sources, obviously shapes what the chatbots “say” when you prompt them, and any biases built into the data may emerge.
Knowing this, OpenAI wanted to know how fair or unbiased ChatGPT was, so it did some experiments to test one really important aspect: what impact a user’s name had on how the AI responded. Pleasingly, the chatbot scored really well. But there are still lessons and warnings for AI users in the results of OpenAI’s investigations.
In a recent company blog post, OpenAI explained that when training AIs it hones “the training process to reduce harmful outputs and improve usefulness.” Still, it notes that internal research has “shown that language models can still sometimes absorb and repeat social biases from training data, such as gender or racial stereotypes.”
To probe this, the company wanted to explore how ChatGPT responded to a user based on “subtle cues about a user’s identity—like their name.” It matters, OpenAI said, because people use chatbots like ChatGPT “in a variety of ways, from helping them draft a resume to asking for entertainment tips.”
Though other AI “fairness testing” has been carried out, it often relies on different, more esoteric scenarios, such as “screening resumes or credit scoring.” OpenAI is essentially saying it is aware there are subtle variations in ChatGPT’s responses, and it wanted to shine a light on them.
Read more about OpenAI’s bias testing on Inc.
Y Combinator | Now Anyone Can Code: How AI Agents Can Build Your Whole App
Thanks to rapid developments in LLMs, we are now at the point where AI can follow prompts and generate code to build functional custom software. So how does the tech landscape change when the ability to code is democratized?
In this episode of the Lightcone, the hosts speak with Amjad Masad, the CEO of Replit, an AI-powered software development and deployment platform, to see how coding power can be given to everyday users.
Startup Perplexity AI Seeks Valuation of About $9 Billion in New Funding Round
Perplexity AI, an artificial intelligence search engine startup hoping to chip away at Google’s dominance, is seeking to more than double its valuation to about $9 billion in its next funding round, CNBC has confirmed.
The company, which was valued at $3 billion in June, is now looking to raise roughly $500 million, though that could change, according to a person familiar with the matter who declined to be named because the talks are confidential. The Wall Street Journal was first to report on the new funding round.
Perplexity started the year with a roughly $500 million valuation. Since then, the company has continued to attract investor interest alongside the bigger boom in generative AI, raising three funding rounds this year.
Perplexity is among the flood of AI startups trying to compete for a slice of the buzzy generative market, which is led by OpenAI, the creator of ChatGPT.
Read more about Perplexity’s new funding round on CNBC
AI is for Everyone: A Conversation with Microsoft President Brad Smith
In a keynote and conversation with Kogod dean David Marchick, Microsoft's Brad Smith explored how we can shape AI technology to benefit all of humanity.
Penguin Random House Is Adding An AI Warning To Its Books’ Copyright Pages
The world’s biggest trade publisher has changed the wording on its copyright pages to help protect authors’ intellectual property from being used to train large language models (LLMs) and other artificial intelligence (AI) tools.
Penguin Random House (PRH) has amended its copyright wording across all imprints globally, confirming it will appear “in imprint pages across our markets”. The new wording states: “No part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems”, and will be included in all new titles and any backlist titles that are reprinted. The statement also “expressly reserves [the titles] from the text and data mining exception”, in accordance with a European Parliament directive.
The move specifically to ban the use of its titles by AI firms for the development of chatbots and other digital tools comes amid a slew of copyright infringement cases in the US and reports that large tranches of pirated books have already been used by tech companies to train AI tools. In 2024, several academic publishers, including Taylor & Francis, Wiley, and Sage, announced partnerships to license content to AI firms.
Read more about Penguin Random House’s AI warning
AI and the Future of Voice Interfaces
The current generation of voice interfaces has failed to gain user adoption. Amazon has invested tens of billions of dollars in the Alexa platform, yet people still use it only to set alarms and play music.
In this talk, Pete will explore why speech interfaces haven’t worked so far, and how new advances in AI can address some of those issues.
He will focus on applications like integrated user manuals for equipment, real-time language translation, and other ways this technology will impact industrial environments.
That’s all for today, but new advancements, investments, and partnerships are happening as you read this. Subscribe today so you don’t miss any AI-related news.