Cisco has entered an increasingly competitive race to dominate AI data centre interconnect technology, becoming the latest major player to unveil purpose-built routing hardware for connecting distributed AI workloads across multiple facilities.
The networking giant introduced its 8223 routing system on October 8, billing it as the industry's first 51.2 terabit per second fixed router designed specifically to link data centres running AI workloads.
At its core sits the new Silicon One P200 chip, representing Cisco's answer to a challenge that's increasingly constraining the AI industry: what happens when you run out of room to grow.
A three-way battle for scale-across supremacy?
For context, Cisco isn't alone in recognising this opportunity. Broadcom fired the first salvo in mid-August with its "Jericho 4" StrataDNX switch/router chips, which began sampling then and likewise offer 51.2 Tb/sec of aggregate bandwidth, backed by HBM memory for deep packet buffering to manage congestion.
Two weeks after Broadcom's announcement, Nvidia unveiled its Spectrum-XGS scale-across network, a notably cheeky name given that Broadcom's "Trident" and "Tomahawk" switch ASICs belong to the StrataXGS family.
Nvidia secured CoreWeave as its anchor customer but provided limited technical details about the Spectrum-XGS ASICs. Now Cisco is rolling out its own components for the scale-across networking market, setting up a three-way competition among networking heavyweights.
The problem: AI is too big for one building
To understand why multiple vendors are rushing into this space, consider the scale of modern AI infrastructure. Training large language models or running complex AI systems requires thousands of high-powered processors working in concert, generating enormous amounts of heat and consuming massive amounts of electricity.
Data centres are hitting hard limits: not just on available space, but on how much power they can supply and how much heat they can remove.
"AI compute is outgrowing the capacity of even the largest data centre, driving the need for reliable, secure connection of data centres hundreds of miles apart," said Martin Lund, Executive Vice President of Cisco's Common Hardware Group.
The industry has traditionally addressed capacity challenges through two approaches: scaling up (adding more capability to individual systems) or scaling out (connecting more systems within the same facility).
But both strategies are reaching their limits. Data centres are running out of physical space, power grids can't supply enough electricity, and cooling systems can't dissipate the heat fast enough.
This forces a third approach: "scale-across," distributing AI workloads across multiple data centres that might be in different cities or even different states. However, this creates a new problem: the connections between these facilities become critical bottlenecks.
Why traditional routers fall short
AI workloads behave differently from typical data centre traffic. Training runs generate massive, bursty traffic patterns: periods of intense data movement followed by relative quiet. If the network connecting data centres can't absorb these surges, everything slows down, wasting expensive computing resources and, critically, time and money.
Traditional routing equipment wasn't designed for this. Most routers prioritise either raw speed or sophisticated traffic management, but struggle to deliver both simultaneously while maintaining reasonable power consumption. For AI data centre interconnect applications, organisations need all three: speed, intelligent buffering, and efficiency.
Ciscoβs answer: The 8223 system
Cisco's 8223 system represents a departure from general-purpose routing equipment. Housed in a compact three-rack-unit chassis, it delivers 64 ports of 800-gigabit connectivity, currently the highest density available in a fixed routing system. More importantly, it can process over 20 billion packets per second and scale to three exabytes per second of interconnect bandwidth.
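Those headline figures hang together arithmetically, as a quick back-of-the-envelope check shows (the port count, per-port speed, and packet rate come from the announcement; the derived numbers below are our own illustrative arithmetic, not Cisco's published methodology):

```python
# Sanity-check the 8223's headline numbers.
# Inputs are taken from the announcement; derived values are illustrative.

PORTS = 64
PORT_SPEED_GBPS = 800          # 800G per port
PACKETS_PER_SEC = 20e9         # "over 20 billion packets per second"

aggregate_gbps = PORTS * PORT_SPEED_GBPS
aggregate_tbps = aggregate_gbps / 1_000
print(f"Aggregate throughput: {aggregate_tbps} Tb/s")   # 51.2 Tb/s

# Implied average packet size if the router forwards 20 Gpps at line rate:
avg_packet_bytes = (aggregate_tbps * 1e12) / PACKETS_PER_SEC / 8
print(f"Implied average packet size: {avg_packet_bytes:.0f} bytes")  # 320 bytes
```

In other words, 64 ports at 800 Gb/s each is exactly the quoted 51.2 Tb/s, and sustaining 20 billion packets per second at that rate implies packets averaging around 320 bytes.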
The system's distinguishing feature is deep buffering, enabled by the P200 chip. Think of buffers as temporary holding areas for data, like a reservoir that catches water during heavy rain. When AI training generates traffic surges, the 8223's buffers absorb the spike, preventing the congestion that would otherwise leave expensive GPU clusters sitting idle while they wait for data.
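The reservoir analogy can be made concrete with a toy queueing sketch (purely illustrative: the capacities, drain rate, and traffic pattern below are invented, and this is not how the P200 actually works). The same burst that overflows a shallow buffer is absorbed by a deep one and drained during the quiet period:

```python
# Toy fluid model of a router buffer facing bursty AI traffic.
# All units are arbitrary; values are invented for illustration only.

def simulate(buffer_capacity, arrivals, drain_rate):
    """Queue each interval's arrivals, drop any overflow, then drain
    at the link's fixed rate. Return the total data dropped."""
    queued = 0
    dropped = 0
    for arriving in arrivals:
        queued += arriving
        if queued > buffer_capacity:          # buffer overflows: data is lost
            dropped += queued - buffer_capacity
            queued = buffer_capacity
        queued = max(0, queued - drain_rate)  # link drains at a fixed rate
    return dropped

# A bursty pattern: intense data movement, then relative quiet.
burst_traffic = [50, 50, 50, 0, 0, 0, 0, 0]
drain = 20  # the inter-data-centre link's capacity per interval

print("shallow buffer drops:", simulate(40, burst_traffic, drain))   # 70
print("deep buffer drops:   ", simulate(200, burst_traffic, drain))  # 0
```

The shallow buffer loses data during every burst interval; the deep buffer holds the backlog and empties it once the burst subsides, so nothing is dropped and no retransmissions stall the GPUs.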
Power efficiency is another critical advantage. As a 3RU system, the 8223 achieves what Cisco describes as "switch-like power efficiency" while maintaining routing capabilities, which is crucial when data centres are already straining power budgets.
The system also supports 800G coherent optics, enabling connections spanning up to 1,000 kilometres between facilities, essential for the geographic distribution of AI infrastructure.
Industry adoption and real-world applications
Major hyperscalers are already deploying the technology. Microsoft, an early Silicon One adopter, has found the architecture valuable across multiple use cases.
Dave Maltz, Technical Fellow and Corporate Vice President of Azure Networking at Microsoft, noted that "the common ASIC architecture has made it easier for us to expand from our initial use cases to multiple roles in DC, WAN, and AI/ML environments."
Alibaba Cloud plans to use the P200 as a foundation for expanding its eCore architecture. Dennis Cai, Vice President and Head of Network Infrastructure at Alibaba Cloud, stated the chip "will enable us to extend into the Core network, replacing traditional chassis-based routers with a cluster of P200-powered devices."
Lumen is also exploring how the technology fits into its network infrastructure plans. Dave Ward, Chief Technology Officer and Product Officer at Lumen, said the company is "exploring how the new Cisco 8223 technology may fit into our plans to enhance network performance and roll out superior services to our customers."
Programmability: Future-proofing the investment
One often-overlooked aspect of AI data centre interconnect infrastructure is adaptability. AI networking requirements are evolving rapidly, with new protocols and standards emerging regularly.
Traditional hardware typically requires replacement or expensive upgrades to support new capabilities. The P200's programmability addresses this challenge.
Organisations can update the silicon to support emerging protocols without replacing hardware, which matters when individual routing systems represent significant capital investments and AI networking standards remain in flux.
Security considerations
Connecting data centres hundreds of miles apart introduces security challenges. The 8223 includes line-rate encryption using post-quantum-resilient algorithms, addressing concerns about future threats from quantum computing. Integration with Cisco's observability platforms provides detailed network monitoring to identify and resolve issues quickly.
Can Cisco compete?
With Broadcom and Nvidia already staking their claims in the scale-across networking market, Cisco faces established competition. However, the company brings advantages: a long-standing presence in enterprise and service provider networks, the mature Silicon One portfolio launched in 2019, and relationships with major hyperscalers already using its technology.
The 8223 ships initially with open-source SONiC support, with IOS XR planned for future availability. The P200 will be available across multiple platform types, including modular systems and the Nexus portfolio.
This flexibility in deployment options could prove decisive as organisations seek to avoid vendor lock-in while building out distributed AI infrastructure.
Whether Cisco's approach becomes the industry standard for AI data centre interconnect remains to be seen, but the fundamental problem all three vendors are addressing (efficiently connecting distributed AI infrastructure) will only grow more pressing as AI systems continue scaling beyond single-facility limits.
The real winner may ultimately be determined not by technical specifications alone, but by which vendor can deliver the most complete ecosystem of software, support, and integration capabilities around their silicon.

