Meta Spending Soaring To $65 Billion On Artificial Intelligence, Massive Data Centers
Social-media giant to spend between $60 billion and $65 billion, Zuckerberg says.
Mark Zuckerberg announced a huge leap in Meta Platforms' capital spending this year, to between $60 billion and $65 billion, an increase driven by artificial intelligence and a massive new data center.
The plan to increase the company's capital expenditures by as much as roughly 70% over 2024 comes days after tech rivals including OpenAI unveiled a $500 billion spending plan, called Stargate, backed by President Trump.
“This will be a defining year for AI,” Zuckerberg said in a post on Facebook. “This is a massive effort, and over the coming years it will drive our core products and business, unlock historic innovation, and extend American technology leadership. Let’s go build!”
Meta operates a suite of AI products, including an open-source model that developers can build on top of and AI chatbots embedded in its apps. The company is also planning to build an AI engineer that will start writing its own code, Zuckerberg said Friday. The company’s shares rose by less than 1% in early trading.
The spending plan is a roughly $14 billion jump from 2025 analyst projections, according to FactSet. Meta has been ramping up spending on AI over the past few years. The company hasn’t released the 2024 capital expenditure number yet, but analysts expect it will come in around $38 billion, already a 40% jump from 2023.
In 2024, Meta broke ground on six new data centers. This year, the company plans to bring one gigawatt of computing power online and build out a data center in Louisiana that is “so large it would cover a significant part of Manhattan.” Meta expects to end the year with more than 1.3 million graphics processing units, commonly known as GPUs, Zuckerberg said.
More on Meta’s spending on data centers and AI on WSJ
7 Disruptions Through 2029 You Might Not See Coming | Daryl Plummer: Gartner
Disruptions aren't just temporary shifts; they pave the way for lasting change. The challenge for CIOs? Staying ahead of the curve amidst increasing disruptions. This will decide who leads and who follows.
In his session from Gartner IT Symposium/Xpo 2024, Gartner Distinguished VP Analyst and Chief of Research Daryl Plummer delves into several of the imminent disruptions that need to be on every CIO's strategic radar.
Bruce Burke Participating In Forbes Entrepreneur Of Impact Competition
VOTING IS NOW OPEN - I HAVE TO ASK - PLEASE VOTE FOR BRUCE BURKE
Exciting News! I have been selected to participate in the Entrepreneur of Impact competition. One visionary winner will be featured in Forbes, receive $25,000, and have a one-on-one mentoring session with Shark Tank's own Daymond John.
I'm proposing building an AI-powered, fully automated news and information organization that creates news articles, videos, podcasts, deep dives, special reports, white papers, and more — focused on the ever-expanding world of AI.
Voting is now open and I would appreciate your vote. I have set up a profile that outlines my proposal, linked below.
CLICK HERE TO VOTE FOR BRUCE BURKE IN ENTREPRENEUR OF IMPACT
Hugging Face Platform Shrinks Artificial Intelligence Vision Models To Phone-Friendly Size, Slashing Computing Costs
Hugging Face has achieved a remarkable breakthrough in AI, introducing vision-language models that run on devices as small as smartphones while outperforming their predecessors that require massive data centers.
The company’s new SmolVLM-256M model, requiring less than one gigabyte of GPU memory, surpasses the performance of its Idefics 80B model from just 17 months ago — a system 300 times larger. This dramatic reduction in size and improvement in capability marks a watershed moment for practical AI deployment.
“When we released Idefics 80B in August 2023, we were the first company to open-source a video language model,” Andrés Marafioti, machine learning research engineer at Hugging Face, said in an exclusive interview with VentureBeat. “By achieving a 300x size reduction while improving performance, SmolVLM marks a breakthrough in vision-language models.”
Smaller AI models that run on everyday devices
The advancement arrives at a crucial moment for enterprises struggling with the astronomical computing costs of implementing AI systems. The new SmolVLM models — available in 256M and 500M parameter sizes — process images and understand visual content at speeds previously unattainable at their size class.
The smallest version processes 16 examples per second while using only 15GB of RAM with a batch size of 64, making it particularly attractive for businesses looking to process large volumes of visual data. “For a mid-sized company processing 1 million images monthly, this translates to substantial annual savings in compute costs,” Marafioti told VentureBeat. “The reduced memory footprint means businesses can deploy on cheaper cloud instances, cutting infrastructure costs.”
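For readers who want to experiment with a model in this size class, here is a minimal sketch of how it might be loaded and queried with the Hugging Face transformers library. The model id, image URL, and generation settings are assumptions for illustration, not details from the article; check the model card on the Hugging Face Hub for exact usage.
```python
# Minimal sketch (assumed model id and placeholder image URL, not from the article):
# loading a small vision-language model from the Hugging Face Hub and asking it
# to describe an image.
import requests
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

model_id = "HuggingFaceTB/SmolVLM-256M-Instruct"  # assumed Hub id; verify on the Hub
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Fetch any test image (placeholder URL) and build a chat-style prompt.
image = Image.open(requests.get("https://example.com/sample.jpg", stream=True).raw)
messages = [{
    "role": "user",
    "content": [{"type": "image"}, {"type": "text", "text": "Describe this image."}],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

# Generate a short caption and decode it back to text.
output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```
A model this small can also run on CPU or a modest laptop GPU, which is the point of the release: the same call pattern works without data-center hardware.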
More on SmolVLM-256M model on VentureBeat
AI Revolution: Why This Is The Best Time To Start A Startup | Lightcone Podcast
In this special episode of Lightcone, we’re joined by YC partner and creator of Gmail Paul Buchheit to dig into some of the latest trends in the world of AI startups.
We recorded our conversation at a recent retreat where 300 of the top AI founders in the world gathered to share expertise and make predictions about how this technology will shape our future.
In the discussion we cover a wide range of topics, including the future of work, the power of agency and taste in an AI world, and why this is the absolute best time to be building a startup.
Moving On IT | Authorized Partner For IT, AI, And Cybersecurity Solutions
I’ve partnered with Moving On IT, your authorized partner for navigating the complex landscape of today’s technology. Moving On IT specializes in providing cutting-edge hardware, software, and cybersecurity solutions tailored to your needs.
From robust IT infrastructure to advanced AI applications, Moving On IT empowers businesses to thrive in the digital age. Contact Moving On IT with all your IT, AI, and cybersecurity requirements. Call +1 (727) 490-9418, or email: info@movingonit.com
Check out the latest Moving On IT press release on CIO Dive | CLICK HERE
Anthropic Just Released A Major New Feature To Make Your AI Smarter
It’s called Citations, and it means that the AI known as Claude can link back to specific spots in documents to let you know how it arrived at an answer.
Artificial intelligence startup Anthropic has launched a new feature for its “Claude” family of AI models, one that enables the models to cite and link back to sources when answering questions about uploaded documents. The new feature, appropriately dubbed “Citations,” is now available for developers through Anthropic’s API.
In a blog post announcing the new feature, Anthropic said that users can now upload source documents to Claude, and the model will reference these sources when answering questions. It’ll also link back to the specific sections of documents where it found an answer. Through this process, Anthropic claims it has been able to improve Claude’s accuracy by up to 15 percent.
An early use case of the new feature comes from news and legal organization Thomson Reuters, whose generative AI legal assistant, CoCounsel, is powered by Claude. The company’s head of product for CoCounsel, Jake Heller, said that his team originally built their own custom prompt engineering solution for citing sources when analyzing legal documents, “but it was really hard to build and maintain.” With the Citations feature, Heller said citing sources and generating links is much easier, which has in turn bolstered trust in the AI system’s accuracy.
Anthropic anticipates that Citations will be used to summarize long documents while making it easier to verify information, answer questions about documents more insightfully, and, in the case of customer support, reference “multiple product manuals, FAQs, and support tickets, always citing the exact source of information.”
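For developers curious what a Citations call might look like, the sketch below uses Anthropic's Python SDK with placeholder model, document, and question values; the exact request and response fields should be verified against Anthropic's API documentation.
```python
# Minimal sketch (placeholder values): asking Claude a question about an uploaded
# document with citations enabled, via Anthropic's Python SDK.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {
                # Document content block with citations turned on.
                "type": "document",
                "source": {
                    "type": "text",
                    "media_type": "text/plain",
                    "data": "Meta plans to spend $60 billion to $65 billion on AI in 2025.",
                },
                "title": "Sample source document",
                "citations": {"enabled": True},
            },
            {"type": "text", "text": "How much does Meta plan to spend on AI?"},
        ],
    }],
)

# Text blocks in the response may carry citation objects that point back to
# specific spans of the uploaded document (field names per Anthropic's docs).
for block in response.content:
    if block.type == "text":
        print(block.text)
        for citation in getattr(block, "citations", None) or []:
            print("  cited:", citation.cited_text)
```
The appeal for teams like Thomson Reuters is that the source links come from the API itself rather than from custom prompt engineering, which is what made earlier citation pipelines hard to build and maintain.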
More on Anthropic’s new Citations feature on Inc.
OpenAI Chairman Bret Taylor On The New Jobs AI Will Usher Into The Future
What new kinds of jobs will AI bring that we never could have imagined before? In this special two-part episode, Reid and Aria explore this question and more with Sierra co-founder and OpenAI chairperson Bret Taylor.
Part one features audio from Bret's onstage interview at the 2024 Masters of Scale Summit, where he shared his insights on the voice revolution in AI, the technology’s latest role as a phone customer service agent, and the groundbreaking business opportunities still waiting to be explored in the AI space.
Reid and Aria invited Bret to continue the conversation in part two, diving deeper into how AI might reshape our workforce, create new career opportunities, and spark industries we haven’t yet imagined.
With leadership experience at both startups (Quip and FriendFeed) and large tech companies (Facebook, Salesforce, and Twitter), Bret is uniquely positioned to track and seize opportunities in the quickly developing AI industry.
Logictry’s AI-Driven Platform | Helps You Make Smarter Decisions Faster
I’ve partnered with Logictry, an AI platform that helps you make smarter decisions faster. Check out the case study linked below to see how National Instruments used the Logictry platform to enable its sales team, as well as external distributors and partners.
If you’d like more information about use cases for the Logictry platform, message me.
Watch ChatGPT's Operator AI Agent Solve A CAPTCHA Like A Human Being
It’s 2025, and we still have to deal with CAPTCHAs on the web, the online browsing disruption we never wanted and can’t get rid of. Then again, CAPTCHAs are there to protect websites from abuse by malicious actors. With that in mind, it’s pretty obvious why sites continue to use them.
However, with the upcoming wave of AI agents that can browse the web and perform actions on our behalf, CAPTCHAs might become a thing of the past. That is, services like ChatGPT Operator might be able to deal with CAPTCHAs on our behalf.
Can AI agents reliably click on all images showing motorcycles or traffic lights for us? It might be too early to tell, considering that a robot will essentially have to tell a website that it is not a robot. However, it looks like at least one Operator user was able to have the AI agent beat CAPTCHAs for him.
OpenAI announced Operator on Thursday, making it available for testing to ChatGPT users on the $200/month Pro subscription. I already explained that I wouldn’t pay that much to act as a tester for the technology, no matter how brilliant I think OpenAI’s take on Operator might be.
But I also said that if you already use other ChatGPT Pro perks and are in the US, trying Operator is a no-brainer. I can’t wait to use Operator myself once it’s available in the EU on the cheaper ChatGPT tiers. One ChatGPT user who got their hands on Operator early posted a video on Reddit that shows how the AI agent deals with image-based CAPTCHAs.
More on ChatGPT’s Operator ability to solve CAPTCHAs on BGR
Accenture’s Julie Sweet: Physical AI Is The Next Big Thing—Write That Down
Julie Sweet, CEO of Accenture, joins CNBC's Andrew Ross Sorkin at the World Economic Forum in Davos to discuss key global issues such as AI, tariffs, and workforce transformation. She emphasizes Europe's focus on competitiveness amidst tariff uncertainties and highlights varying levels of economic confidence worldwide.
That’s all for today, but AI is moving fast. Like, comment, and subscribe for more AI news! Please vote for me in the Entrepreneur of Impact competition every day!
Thank you for supporting my partners; it’s how I keep Neural News Network free.