Pruna Raises $6.5 Million Compressing AI Models To Make Them More Efficient
Check out the pitch deck this startup used to raise $6.5 million.
Paris and Munich-based software startup Pruna AI has secured $6.5 million from EQT Ventures. The startup has developed an optimization tool for compressing AI models so that they require less compute and energy.
"It's similar to a zip that compresses files, so they are cheaper, faster, and greener, and everyone can access and use those AI models," Bertrand Charpentier, president and chief scientist officer at Pruna AI, told Business Insider.
Users can run their model through Pruna's optimization engine, which the startup says renders the AI model smaller, cheaper, and greener on any hardware platform. It spans everything from natural language processing to images and audio systems.
The startup's customers are midsize to large companies with their own AI models, which can rack up huge compute bills. Charpentier said that using Pruna's platform has been a win-win, as companies can significantly reduce the cost of running these AI models. Pruna charges its customers based on usage of the compressed model, which is typically much cheaper to run than the uncompressed version.
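Pruna hasn't published the details of its engine here, but the "zip for AI models" analogy maps onto well-known compression techniques. As a rough illustration only, and emphatically not Pruna's actual method or API, here is a minimal sketch of one such technique, 8-bit post-training quantization, showing why a compressed model is smaller and cheaper to run:

```python
# Illustrative sketch of one common model-compression technique
# (symmetric 8-bit post-training quantization). This is NOT Pruna's
# engine; it only shows where the size savings come from.

def quantize_int8(weights):
    """Map float weights onto 255 integer levels in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127  # one float stored per tensor
    q = [round(w / scale) for w in weights]     # int8 values
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference."""
    return [qi * scale for qi in q]

weights = [0.12, -0.53, 0.97, -0.08, 0.44]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Storage drops from 4 bytes (float32) to 1 byte (int8) per weight: ~4x smaller.
max_error = max(abs(w - r) for w, r in zip(weights, restored))
print(q)          # [16, -69, 127, -10, 58]
print(max_error)  # bounded by scale / 2
```

Real compression engines combine quantization with techniques like pruning, distillation, and compilation, and trade a small accuracy loss like `max_error` above for the reduction in memory and compute.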
Read more about Pruna AI’s growth on Yahoo Tech
Transform Your Business With Agentic AI
Agentic AI is transforming every enterprise, using sophisticated reasoning and iterative planning to solve complex, multi-step problems. Learn how NVIDIA AI Blueprints help turn data into knowledge and knowledge into action by automating processes, tapping into real-time insights, and improving workflow efficiency at scale.
The Simple ChatGPT Trick That Will Transform Your Business AI Interactions
I believe ChatGPT and other generative AI tools can help pretty much any business. With a low-cost subscription or even simply using free tools, advanced AI assistance that would have seemed the stuff of science fiction just a few short years ago is within reach of anyone.
Without specific information, though, the advice and output these genAI tools give can be formulaic and mundane. Simply ask one to “write me a blog on subject X” or “create a business plan for my Y business” and you’ll see what I mean. The problem is that, by default, it doesn’t know enough about you, your business, or its specific challenges to create anything very useful.
Luckily, there’s a way to ensure it has the information it needs to give you very specialized, specific advice that’s relevant to your opportunities and challenges. Get it to ask you what it needs to know.
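In practice, that can be as simple as ending your request with an instruction to interview you first. A hypothetical prompt along these lines:

```
I want you to write a marketing plan for my business. Before you write
anything, ask me questions, one at a time, about my business, customers,
and goals until you have enough context to give specific, non-generic
advice. Then draft the plan.
```

The exact wording is illustrative; the key move is flipping the direction of the conversation so the model gathers your specifics before producing output.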
Read more about simple ChatGPT tricks on Forbes
LangChain Vs LangGraph: A Tale of Two Frameworks
Get ready for a showdown between LangChain and LangGraph, two powerful frameworks for building applications with large language models (LLMs). Master Inventor Martin Keen compares the two, looking at their unique features, use cases, and how they can help you create innovative, context-aware solutions.
Indian Government Working On Code Of Conduct For AI Companies
The Ministry of Electronics and Information Technology is reportedly working on voluntary codes of conduct and ethics for companies to follow for the work they do with AI and GenAI.
As per an ET report, these guidelines will serve as “informal directive principles,” targeting companies that create large language models (LLMs) or utilize data for training AI and machine learning models. The voluntary code of conduct is expected to be released early next year.
“A law on AI is still some time away. We are talking to all stakeholders right now to see what can be included and trying to get the industry onboard on a common set of principles and guidelines,” an official told ET.
This code is likely to include broad principles outlining measures companies can adopt during the training, deployment, and commercial sale of their LLMs and AI platforms. It will also emphasize identifying and addressing potential instances of misuse of these technologies, according to a second official.
Read more about the voluntary AI code of conduct
Sir David Attenborough Says AI Clone Of His Voice Is 'Disturbing' | BBC News
"I am profoundly disturbed to find these days my identity is being stolen by others and greatly object to them using it to say whatever they wish."
That's how broadcaster and biologist Sir David Attenborough has reacted after the BBC played him clips of his voice being mimicked by Artificial Intelligence.
Dr Jennifer Williams, a researcher of AI audio, explains the issues of voices of prominent figures such as Sir David being cloned.
Francois Chollet, Creator of Keras, Leaves Google
Francois Chollet—an AI pioneer and the creator of Keras—is leaving Google, the latest in a string of AI pioneers to leave the company. Keras is a Python deep learning API that bills itself as “a superpower for developers.”
The purpose of Keras is to give an unfair advantage to any developer looking to ship machine-learning-powered apps. Keras focuses on debugging speed, code elegance and conciseness, maintainability, and deployability. When you choose Keras, your codebase is smaller, more readable, and easier to iterate on.
In a blog post, Bill Jia, VP of Engineering for Core ML, and Xavi Amatriain, VP of ACE (AI and Compute Enablement), announced Chollet's departure:
"Today, we’re announcing that Francois Chollet, the creator of Keras and a leading figure in the AI world, is embarking on a new chapter in his career outside of Google. While we are sad to see him go, we are incredibly proud of his immense contributions and excited to see what he accomplishes next."
More about Francois Chollet’s departure from Google
Debunking AI Doom | Nora Belrose
Nora Belrose, Head of Interpretability Research at EleutherAI, discusses critical challenges in AI safety and development. The conversation begins with her technical work on concept erasure in neural networks through LEACE (LEAst-squares Concept Erasure), while highlighting how neural networks' progression from simple to complex learning patterns could have important implications for AI safety.
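To give a flavor of what "concept erasure" means: the full LEACE method derives an affine, whitening-aware least-squares projection, but the core intuition can be sketched with a much simpler (and cruder) operation, projecting representations onto the subspace orthogonal to a single estimated concept direction. The sketch below is that simplified version, not the actual LEACE algorithm:

```python
import numpy as np

# Simplified illustration of linear concept erasure. NOT the full LEACE
# algorithm (which uses a whitening-aware least-squares projection); we
# just remove each vector's component along one estimated concept
# direction, so that direction no longer separates the classes.

rng = np.random.default_rng(0)

# Toy embeddings: two classes (concept absent / present), separated along axis 0.
X0 = rng.normal(0.0, 1.0, size=(100, 8)); X0[:, 0] -= 2.0   # concept = 0
X1 = rng.normal(0.0, 1.0, size=(100, 8)); X1[:, 0] += 2.0   # concept = 1
X = np.vstack([X0, X1])

# Estimate the concept direction as the unit-norm difference of class means.
u = X1.mean(axis=0) - X0.mean(axis=0)
u /= np.linalg.norm(u)

# Erase: subtract each vector's component along u.
X_erased = X - np.outer(X @ u, u)

# Before erasure the class means differ strongly along u; after, they don't.
gap_before = (X1.mean(axis=0) - X0.mean(axis=0)) @ u
gap_after = (X_erased[100:].mean(axis=0) - X_erased[:100].mean(axis=0)) @ u
print(gap_before)  # large (~4, by construction)
print(gap_after)   # ~0 up to floating-point error
```

A single mean-difference projection like this can damage unrelated information in the representations; part of LEACE's contribution is removing the concept while provably changing the embeddings as little as possible.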
Many fear that advanced AI will pose an existential threat -- pursuing its own dangerous goals once it's powerful enough. But Belrose challenges this popular doomsday scenario with a fascinating breakdown of why it doesn't add up. Belrose also provides a detailed critique of current AI alignment approaches, particularly examining "counting arguments" and their limitations when applied to AI safety.
She argues that the Principle of Indifference may be insufficient for addressing existential risks from advanced AI systems. The discussion explores how emergent properties in complex AI systems could lead to unpredictable and potentially dangerous behaviors that simple reductionist approaches fail to capture.
The conversation concludes by exploring broader philosophical territory, where Belrose discusses her growing interest in Buddhism's potential relevance to a post-automation future. She connects concepts of moral anti-realism with Buddhist ideas about emptiness and non-attachment, suggesting these frameworks might help humans find meaning in a world where AI handles most practical tasks.
Rather than viewing this automated future with alarm, she proposes that Zen Buddhism's emphasis on spontaneity and presence might complement a society freed from traditional labor.
That's all for today, but new advancements, investments, and partnerships are happening as you read this. AI is moving fast. Subscribe today to stay informed.