Reliance Industries Plans World's Biggest AI Data Center In Jamnagar Gujarat India
In partnership with Nvidia, Reliance plans to build the world's largest AI data center in Gujarat, marking a milestone in India's AI journey and digital infrastructure growth.
Mukesh Ambani’s Reliance Industries is set to build the world’s largest data center in Jamnagar, Gujarat, according to a Bloomberg News report. The facility would dwarf the current largest data center, Microsoft’s 600-megawatt site in Virginia. The project could cost between $20 billion and $30 billion, the report added. This project marks a significant step in Reliance’s entry into India’s artificial intelligence (AI) sector.
The company has reportedly procured AI semiconductors from Nvidia, a global leader in AI technology. This follows the October 2024 announcement at the Nvidia AI Summit, where Reliance and Nvidia revealed plans to collaborate on AI infrastructure. Nvidia committed to supplying its advanced Blackwell AI processors for a one-gigawatt data center that Reliance plans to establish.
Commenting on India’s AI potential at the summit, Jensen Huang, Nvidia’s CEO, stated: “It makes complete sense that India should manufacture its own AI. You should not export data to import intelligence. India should not export flour to import bread.”
Mukesh Ambani echoed this sentiment, highlighting India’s strong digital connectivity infrastructure. He said: “We can use intelligence to actually bring prosperity to all the people and bring equality to the world. Apart from the US and China, India has the best digital connectivity infrastructure.”
Reliance and Nvidia’s growing partnership
In September 2024, Reliance Industries and Nvidia partnered to develop AI supercomputers and large language models (LLMs) tailored to India’s diverse languages. This collaboration underscores Reliance’s ambition to leverage AI for India’s unique needs. Nvidia later announced a similar partnership with the Tata Group, reflecting its commitment to India’s AI-driven growth.
More on Reliance’s data center and relationship with Nvidia
Google DeepMind CEO Demis Hassabis: The Path To AGI, Deceptive AIs, Building A Virtual Cell | Big Technology Podcast
Demis Hassabis is the CEO of Google DeepMind. He joins Big Technology Podcast to discuss the cutting edge of AI and where the research is heading. In this conversation, we cover the path to artificial general intelligence, how long it will take to get there, how to build world models, whether AIs can be creative, and how AIs are trying to deceive researchers.
Stay tuned for the second half, where we discuss Google's plan for smart glasses and Hassabis's vision for a virtual cell. Hit play for a fascinating discussion between Alex Kantrowitz and AI pioneer Demis Hassabis, one that both breaks news and leaves you deeply informed about the state of AI and its promising future.
Bruce Burke Participating In Forbes Entrepreneur Of Impact Competition
VOTING IS NOW OPEN - I HAVE TO ASK - PLEASE VOTE FOR BRUCE BURKE
Exciting News! I have been selected to participate in the Entrepreneur of Impact competition. One visionary winner will be featured in Forbes, receive $25,000, and have a one-on-one mentoring session with Shark Tank's own Daymond John.
CLICK HERE TO VOTE FOR BRUCE BURKE IN ENTREPRENEUR OF IMPACT
I'm proposing building an AI-powered, fully automated news and information organization that creates news articles, videos, podcasts, deep dives, special reports, white papers, and more — focused on the ever-expanding world of AI.
Voting is now open, and I would appreciate your vote. I have set up a profile, linked below, that outlines my proposal.
CLICK HERE TO VOTE FOR BRUCE BURKE IN ENTREPRENEUR OF IMPACT
China's Most Popular AI App Gets Facelift With ByteDance's Doubao 1.5
TikTok parent ByteDance has launched an updated version of Doubao, China's most popular consumer-facing artificial intelligence (AI) app, as the tech giant accelerates AI development despite US export restrictions on advanced chips. The Beijing-based company introduced its closed-source multimodal model Doubao 1.5 Pro on Wednesday, emphasizing a "resource-efficient" training approach that it said does not sacrifice performance.
"The model adopted an integrated train-inference design from the pre-training phase to balance between the best performance and most optimal inferencing cost," ByteDance said in a statement, adding that it has designed a server cluster with flexible support for low-end chips to bring down the AI infrastructure costs. China's Big Tech firms are striving to catch up with their US counterparts while facing budget constraints and limited access to advanced chips. This has pushed them to innovate in AI model efficiency, refining their products within the country's closed market.
Benchmark tests have shown that Doubao 1.5 Pro excels in half of the 14 evaluations that assessed the model's language understanding, maths and coding skills, domain knowledge, visual understanding and reasoning abilities. In some areas it outperformed industry-leading AI systems from Microsoft-backed OpenAI, Google, and Amazon.com-backed Anthropic. It also bested domestic rivals in some tests, including systems from recent start-up darling DeepSeek and cloud computing giant Alibaba Group Holding.
More on ByteDance’s Doubao AI app on Yahoo Tech
Scale AI CEO Alexandr Wang On U.S.-China AI Race: “We Need To Unleash U.S. Energy To Enable AI Boom”
Scale AI founder and CEO Alexandr Wang joins 'Squawk Box' to discuss the AI landscape in 2025, the state of the AI arms race between the U.S. and China, the impact of U.S. chip export controls, the future of AI development, his thoughts on the $500 billion Stargate project, AI competition in the U.S., DEI vs. 'MEI' in corporate America, and more in this conversation with CNBC’s Andrew Ross Sorkin at the annual Davos meeting.
Moving On IT | Authorized Partner For IT, AI, And Cybersecurity Solutions
I’ve partnered with Moving On IT, your authorized partner for navigating the complex landscape of today’s technology. Moving On IT specializes in providing cutting-edge hardware, software, and cybersecurity solutions tailored to your needs.
From robust IT infrastructure to advanced AI applications, Moving On IT empowers businesses to thrive in the digital age. Contact Moving On IT with all your IT, AI, and cybersecurity requirements. Call +1 (727) 490-9418, or email info@movingonit.com.
Check out the latest Moving On IT press release on CIO Dive | CLICK HERE
OpenAI’s ‘o3-mini’ Is Free For All Users — What You Need To Know | Tom’s Guide
OpenAI CEO Sam Altman announced today (January 23) that the free tier of ChatGPT will now use the o3-mini model, marking a significant shift in how the popular AI chatbot serves its user base. In the same tweet announcing the change, Altman revealed that paid subscribers to ChatGPT Plus and Pro plans will enjoy “tons of o3-mini usage,” giving people an incentive to move to a paid account with the company.
The o3-mini model is part of OpenAI’s latest advancements in its generative AI technology. Although smaller in scale than the flagship GPT-4 Turbo model, o3-mini promises faster response times, reduced computational requirements, and the ability to handle simpler queries with ease.
This move aims to improve the user experience for free-tier users while allocating premium resources, like GPT-4 Turbo, for paid subscribers who rely on the platform for professional and intensive use cases.
This update comes shortly after ChatGPT went down on Thursday, January 23, at a time when OpenAI is focusing on balancing resource demands. ChatGPT’s free tier has historically been powered by earlier GPT models, but integrating o3-mini represents a strategic pivot toward efficiency. It’s designed to serve the needs of everyday users with lightweight tasks like casual queries, brainstorming, and conversational interaction.
More on OpenAI’s o3-mini being made free on Tom’s Guide
Time Series Forecasting with Lag Llama
Forecasting the future just got a whole lot more precise! Join IBM’s Meredith Mante as she takes you on a deep dive into Lag Llama, an open-source foundation model, and shows you how to harness its power for time series forecasting.
Learn how to load and preprocess data, train a model, and evaluate its performance, gaining a deeper understanding of how to leverage Lag Llama for accurate predictions.
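The load-train-evaluate workflow described above can be gestured at with a toy example. The sketch below illustrates only the general lag-feature idea behind models like Lag Llama, using plain least squares as a stand-in predictor; it does not use the actual Lag Llama API, and all names and numbers in it are illustrative assumptions:

```python
import numpy as np

# Toy sketch of lag-based forecasting: represent each time step by its
# recent lagged values, fit a simple autoregressive predictor, and roll
# it forward. Least squares stands in for the real foundation model.
rng = np.random.default_rng(42)
t = np.arange(200)
series = np.sin(2 * np.pi * t / 24) + 0.1 * rng.normal(size=t.size)

lags = 24
# "Preprocess": build a matrix of lagged windows and next-step targets.
X = np.stack([series[i : i + lags] for i in range(series.size - lags)])
y = series[lags:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # "train"

# "Forecast": roll the fitted predictor forward 12 steps.
history = list(series[-lags:])
forecast = []
for _ in range(12):
    nxt = float(np.dot(coef, history[-lags:]))
    forecast.append(nxt)
    history.append(nxt)

print([round(v, 2) for v in forecast])
```

Evaluation in the video goes further (probabilistic metrics, backtesting); this sketch only shows why lagged windows make a time series look like a supervised learning problem.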
Logictry’s AI-Driven Platform | Helps You Make Smarter Decisions Faster
I’ve partnered with Logictry, an AI platform that helps you make smarter decisions faster. Check out the case study linked below on how National Instruments used the Logictry platform to enable its sales team, as well as external distributors and partners.
If you’d like more information about use cases for the Logictry platform, message me.
No Retraining Needed: Sakana's New AI Model Changes How Machines Learn
Researchers at Sakana AI, an AI research lab focusing on nature-inspired algorithms, have developed a self-adaptive language model that can learn new tasks without the need for fine-tuning. Called Transformer² (Transformer-squared), the model uses mathematical tricks to align its weights with user requests during inference.
This is the latest in a series of techniques that aim to improve the abilities of large language models (LLMs) at inference time, making them increasingly useful for everyday applications across different domains.
Dynamically adjusting weights
Usually, configuring LLMs for new tasks requires a costly fine-tuning process, during which the model is exposed to new examples and its parameters are adjusted. A more cost-effective approach is “low-rank adaptation” (LoRA), which freezes the base weights and trains only a small set of low-rank matrices that adjust the parameters relevant to the target task.
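For intuition, here is a minimal numerical sketch of the LoRA idea for a single linear layer; the dimensions, rank, and variable names are illustrative assumptions, not Sakana's or anyone's production code:

```python
import numpy as np

# LoRA sketch: instead of updating a full d_out x d_in weight matrix,
# train two small factors B (d_out x r) and A (r x d_in), with r << d,
# and add their product to the frozen base weights.
rng = np.random.default_rng(0)
d_in, d_out, r = 512, 512, 8

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weights
A = rng.normal(size=(r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))               # zero init: update starts as a no-op

def forward(x):
    # Effective weight is W + B @ A; only A and B would receive gradients.
    return (W + B @ A) @ x

full_params = W.size
lora_params = A.size + B.size
print(f"trainable: {lora_params} vs full fine-tune: {full_params}")
```

With rank 8 here, the adapter trains roughly 3% of the parameters a full fine-tune would touch, which is the cost saving the paragraph above refers to.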
After training and fine-tuning, the model’s parameters remain frozen, and the only way to repurpose it for new tasks is through techniques such as few-shot and many-shot learning. In contrast to classic fine-tuning, Transformer-squared uses a two-step approach to dynamically adjust its parameters during inference. First, it analyzes the incoming request to understand the task and its requirements, then it applies task-specific adjustments to the model’s weights to optimize its performance for that specific request.
“By selectively adjusting critical components of the model weights, our framework allows LLMs to dynamically adapt to new tasks in real time,” the researchers write in a blog post published on the company’s website.
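The two-step process can be sketched schematically. This is an illustrative toy, not Sakana's implementation: the keyword "task classifier" and the per-task scaling vectors (applied here to the singular values of one weight matrix, echoing the idea of selectively adjusting critical components) are hypothetical stand-ins:

```python
import numpy as np

# Toy sketch of inference-time adaptation: (1) identify the task from the
# request, then (2) re-compose a weight matrix with task-specific scaling
# of its singular values, learned ahead of time.
rng = np.random.default_rng(1)
d = 64
W = rng.normal(size=(d, d))
U, S, Vt = np.linalg.svd(W, full_matrices=False)  # decompose once, offline

# Hypothetical per-task scaling vectors (in practice these would be learned).
experts = {
    "math": 1.0 + 0.1 * rng.normal(size=d),
    "code": 1.0 + 0.1 * rng.normal(size=d),
}

def classify_task(prompt: str) -> str:
    # Step 1: crude keyword stand-in for the model's own task-analysis pass.
    return "code" if "function" in prompt or "def " in prompt else "math"

def adapted_weights(prompt: str) -> np.ndarray:
    # Step 2: task-specific adjustment applied at inference time.
    z = experts[classify_task(prompt)]
    return U @ np.diag(S * z) @ Vt

W_math = adapted_weights("integrate x squared")
W_code = adapted_weights("write a function to sort a list")
```

The point of the sketch is the mechanism, not the numbers: the base weights stay frozen, and each request gets a cheaply re-composed variant rather than a separately fine-tuned model.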
More about Sakana’s new Transformer² AI model on VentureBeat
An Introduction To OpenAI’s Operator
Join Sam Altman, Yash Kumar, Casey Chu, and Reiichiro Nakano as they introduce and demo Operator, a new computer-user AI Agent from OpenAI.
That’s all for today, but AI is moving fast; subscribe today to stay informed. Please don’t forget to vote for me in the Entrepreneur of Impact competition today! Thank you for supporting me and my partners; it’s how I keep NNN free.