Anthropic raises another $4B from Amazon
The company has agreed to train its flagship generative AI models primarily on Amazon Web Services (AWS), Amazon’s cloud computing division.
The OpenAI rival also said it’s working with Annapurna Labs, AWS’ chipmaking division, to develop future generations of Trainium accelerators, AWS’ custom-built chips for training AI models.
“Our engineers work closely with Annapurna’s chip design team to extract maximum computational efficiency from the hardware, which we plan to leverage to train our most advanced foundation models,” Anthropic wrote in a blog post. “Together with AWS, we’re laying the technological foundation — from silicon to software — that will power the next generation of AI research and development.”
In its own post, Amazon clarified that Anthropic will use Trainium, including the current generation, Trainium2, to train its upcoming models. The AI startup will deploy those models on Inferentia, Amazon's in-house chip designed to accelerate model inference and serving, Amazon said.
“By collaborating with Anthropic on the development of our custom Trainium chips, we’ll keep pushing the boundaries of what customers can achieve with generative AI technologies,” AWS CEO Matt Garman said in a statement. “We’ve been impressed by Anthropic’s pace of innovation and commitment to responsible development of generative AI, and look forward to deepening our collaboration.”
Read how many more billions Amazon has invested in Anthropic
Eric Schmidt unveils new book on the future of AI at Princeton University
Eric Schmidt, the technologist, entrepreneur and philanthropist who served as chairman and chief executive officer of Google, will return to his alma mater Princeton on Nov. 20 to discuss “Genesis: Artificial Intelligence, Hope, and the Human Spirit,” the new book he co-authored with Craig Mundie and the late Henry Kissinger.
Charting a course between blind faith and unjustified fear, “Genesis” was written to help today’s decision-makers seize the opportunities presented by artificial intelligence (AI) without falling prey to the darker forces it can unleash, according to its publisher, Little, Brown and Co.
The book advances the urgent conversation surrounding AI with a broad and incisive view of the technology’s potential impact on humanity and the planet. “Individuals, nations, cultures and faiths … will need to decide whether to allow AI to become an intermediary between humans and reality,” the authors write. They analyze AI’s promise and perils in seven vital areas — discovery, truth, politics, security, prosperity, science and fate — and warn of how quickly change may come. “AI seems to compress human timescales. Objects in the future are closer than they appear.”
JPMorgan's AI rollout: Jamie Dimon's a 'tremendous' user and it's caused some 'healthy competition' among teams
Before a business review with JPMorgan CEO Jamie Dimon, Teresa Heitsenrether runs her presentation through one of the bank's generative AI tools to help her pinpoint the message she wants to convey to the top boss. "I say, what is the message coming out of this? Make it more concise. Make it clear. And it certainly has helped with that," Heitsenrether, who is responsible for executing the bank's generative AI strategy, told a conference in New York on Thursday.
Dimon himself is a "tremendous user," she said, and is waiting for the ability to use the bank's tools on his phone. "He's been desperate to get it on his phone and so that's a big deliverable before the end of the year," Heitsenrether added. JPMorgan, America's largest bank, has now rolled out the LLM Suite, a generative AI assistant, to 200,000 of its employees.
The tools are the first step in adopting AI technology across the firm. Heitsenrether, JPMorgan's chief data and analytics officer, speaking at the Evident AI Symposium, said that the next generation would go beyond helping employees write an email or summarize a document and link the tools with their everyday workflow to help people do their jobs.
"Basically go from the five minutes of efficiency to the five hours of efficiency," she added, saying it will take time to reach that goal.
Read more about JPM’s AI adoption on Yahoo Finance
No Priors Ep. 91 | With Cohere Co-Founder and CEO Aidan Gomez
In this episode of No Priors, Sarah is joined by Aidan Gomez, co-founder and CEO of Cohere. Aidan reflects on his journey to co-authoring the groundbreaking 2017 paper, “Attention Is All You Need,” as an intern, and shares his motivations for building Cohere, which delivers AI-powered language models and solutions for businesses.
The discussion explores the current state of enterprise AI adoption and Aidan’s advice for companies navigating the build vs. buy decision for AI tools. They also examine the drivers behind the flattening of model improvements and discuss where large language models (LLMs) fall short for predictive tasks.
The episode closes with what the market has yet to account for in the rapidly evolving AI ecosystem, as well as Aidan’s personal perspective on AGI: what it might look like and when it could arrive.
I have partnered with Logictry to spread the word about their AI platform services.
This case study is just one of the many use cases for the Logictry platform.
If you’re interested in learning more about Logictry’s platform services, message me.
AI chip startup MatX, founded by Google alums, raises Series A funding round at $300M+ valuation, sources say
MatX, a startup designing chips that support large language models, has raised a Series A of approximately $80 million, three sources say, less than a year after raising its $25 million seed round. Spark Capital led the investment, which valued the company in the mid-$200 million range pre-money and the low $300 million range post-money, a person who reviewed the deal told TechCrunch.
MatX was co-founded two years ago by Mike Gunter, who previously worked at Google on the design of Tensor Processing Units (TPUs), the tech giant’s AI chips, and Reiner Pope, who also came from Google’s TPU team, where he wrote AI software.
Gunter and Pope hope to help ease the shortage of chips designed to handle AI workloads. They say the sweet spot for their chips is AI workloads of “at least” 7 billion, and “ideally” 20 billion or more, activated parameters. And they boast that their chips deliver high performance at more affordable prices, according to MatX’s website. The startup says its chips are particularly good at scaling to large clusters because of MatX’s advanced interconnect, aka the communication pathways that AI chips use to transfer information.
Read more about MatX’s fundraising activities on TechCrunch
Vertical AI Agents Could Be 10X Bigger Than SaaS | YC’s Lightcone Podcast
As AI models continue to rapidly improve and compete with one another, a new business model is coming into view: vertical AI agents.
In this episode of the Lightcone podcast, the hosts consider what effect vertical AI agents will have on incumbent SaaS companies, which use cases make the most sense, and how there could be 300 billion-dollar companies in this category alone.
Stretching From LLMs To LGMs: Intelligence And The Amazing Promise Of Large Geospatial Models
In today’s column, I examine the advancement of large language models (LLMs) toward a new generation of large geospatial models (LGMs), an exciting and innovative frontier for the advent and extension of AI.
Here’s the deal. Humans possess and greatly rely upon a sense of geospatial awareness and akin reasoning. We wrap that capability into our other powers of intelligence. Some would argue that they go hand-in-hand, namely that to some degree, our intelligence is spurred by our geospatial knack, and the ability to discern geospatial facets leans strongly into overall intelligence.
Conventional generative AI and LLMs don’t particularly have any such geospatial capacities. You might say that this is a vitally missing component. To solve this, we ought to leverage LLMs and connect them with, or eventually intermix them with, geospatial capabilities. The catchy moniker for that enterprising combination is said to be large geospatial models, or LGMs. Boom, drop the mic. Let’s talk about it.
This analysis of an innovative proposition is part of my ongoing Forbes.com column coverage on the latest in AI including identifying and explaining various impactful AI complexities.
Read more of Lance Eliot’s geospatial proposition on Forbes
Building the Easy Button for Generative AI | May Habib, CEO, Writer
In this episode, we dive into the world of generative AI with May Habib, co-founder of Writer, a platform transforming enterprise AI use. May shares her journey from Qordoba to Writer, emphasizing the impact of transformers in AI.
We explore Writer's graph-based RAG approach and its AI Studio for building custom applications. We also discuss Writer's Autonomous Action functionality, designed to let AI systems act on their own within workflows, and May highlights AI's potential to accelerate product development and market entry with significant gains in capacity and capability.
That's all for today, but new advancements, investments, and partnerships are happening as you read this. AI is moving fast; subscribe today to stay informed.