
NVIDIA and T-Mobile advance AI-RAN for edge-based physical AI

News | By Asma Adhimi
NVIDIA and T-Mobile are stepping up efforts to bring physical AI applications to the network edge, combining AI infrastructure with 5G connectivity. Announced at GTC, the collaboration also includes Nokia and a growing ecosystem of developers focused on deploying real-time vision and reasoning AI agents.

The development signals a shift in how telecom networks are evolving, from connectivity providers into distributed AI computing platforms, with implications across industries from smart cities to industrial automation.

AI-RAN turns networks into edge compute platforms

At the core of the initiative is AI-RAN, which integrates AI workloads directly into radio access networks. T-Mobile has begun piloting NVIDIA’s RTX PRO 6000 Blackwell Server Edition infrastructure alongside Nokia’s anyRAN software, enabling edge AI processing at both cell sites and mobile switching offices.

NVIDIA’s broader AI-RAN portfolio also includes the RTX PRO 4500 Blackwell Server Edition, designed for power-constrained environments. Together, these platforms aim to support distributed AI workloads without compromising 5G performance.

“Telecommunication networks are evolving into the AI infrastructure enabling billions of devices — from vision AI agents to robots and autonomous vehicles — to see, hear and act in real time,” said Jensen Huang, founder and CEO of NVIDIA. “By turning the 5G network into a distributed AI computer with T-Mobile and Nokia, we’re creating a scalable blueprint for the world’s edge AI infrastructure.”

T-Mobile emphasizes that its nationwide standalone 5G network provides the low latency and reliability required for such applications. “Turning networks into distributed AI computing platforms to unlock the full potential of physical AI will require ultra-low latency and space time coherency at the network edge for billions of endpoints, and that’s what we’ve built at T-Mobile,” said Srini Gopalan, chief executive officer of T-Mobile.

Developers bring physical AI use cases to life

A range of developers — including Fogsphere, LinkerVision, Levatas, Vaidio and Siemens Energy — are already building AI agents using NVIDIA’s Metropolis platform. These applications are being integrated into T-Mobile’s distributed edge network.

Early pilot projects highlight diverse use cases. In smart cities, computer vision agents are being tested in San Jose to optimize traffic and improve incident response times. In utilities, AI-enabled drone inspections are helping detect infrastructure issues faster, while industrial safety systems monitor hazardous environments in real time.

These applications rely on offloading compute-heavy tasks to the network edge, reducing the need for powerful on-device hardware and enabling scalable deployments across large fleets of cameras and sensors.

New blueprint accelerates video AI development

To support these efforts, NVIDIA introduced version 3 of its Metropolis Video Search and Summarization (VSS) blueprint. The updated framework adds multimodal understanding, modular architecture and agentic search capabilities, allowing AI agents to analyze and retrieve video insights in seconds.

With more than 1.5 billion cameras deployed globally but less than 1% of their footage ever reviewed, NVIDIA sees significant potential. The VSS blueprint can summarize video up to 100 times faster than manual review, reducing operational costs while improving responsiveness.

As AI-RAN matures, telecom operators, infrastructure vendors and software developers could redefine edge AI deployment and turn networks into active participants in real-time decision-making systems.
