AI breakthroughs make headlines, but for Vishal Kirti, blazing new trails begins with the silicon that makes them possible. As a Senior ASIC Leader at Cisco Systems, he works in the space where physics defines the limits, materials set the rules, and engineering decides how far intelligence can scale.
Senior ASIC Leader at Cisco
When you talk to Vishal Kirti about the importance of semiconductors in relation to data centers, he doesn’t start with futuristic visions, but with math and physics. Imagination doesn’t matter unless there’s silicon to support it.
With almost 18 years of Silicon Valley experience building networking silicon for Cisco Systems, his work ensures the kind of high-speed data movement that the modern digital world depends on. If you think of our hyperconnected reality as a nervous system, Vishal and his colleagues build the neurons that make it tick. There’s nothing artificial about that; it’s all very real.
Different kinds of silicon make up the backbone of modern data centers: hardware accelerators, CPUs, and GPUs, as well as the networking silicon that moves data between them. As workloads evolve, semiconductor requirements inside data centers evolve, too. The goal is to tailor each chip to the job at hand, and the challenge is to get the best performance out of the silicon. Vishal explains:
“No software can run without solid hardware. That sets the scene. In the past, the emphasis was on CPUs, but now workloads are changing and we’re in a phase of transition with a focus on GPUs and TPUs. What’s required are new levels of power, latency, and throughput. So, companies are busy building their own accelerators, shaped around the way AI behaves. Meta is building MTIA, Microsoft is building MAIA, and Amazon is building Trainium. At Cisco, we strive to make sure that accelerators can talk to each other. In a typical AI data center, GPUs within a rack are connected to each other via a scale-up network. When we scale out, we connect racks inside a data center. And when we scale across, we connect one data center with another, sometimes over big distances.
“Companies are placing data centers wherever they can find enough power. That might mean one site in Oklahoma, another in Ohio, and a third in Mexico City. The same model might be trained across all of them, but to make that work, you need a ‘scale across’ network with high-bandwidth memory.”
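To make those three tiers concrete, here is a minimal, purely illustrative sketch in Python of the hierarchy Vishal describes. The class names, rack counts, and GPU counts are hypothetical placeholders chosen for the example, not details of any Cisco or customer design.

```python
# Toy model of the three network tiers: scale-up (inside a rack),
# scale-out (racks inside a data center), scale-across (between sites).
# All numbers are illustrative placeholders.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Rack:
    gpus: int = 8  # accelerators joined by the scale-up fabric

@dataclass
class DataCenter:
    racks: List[Rack] = field(default_factory=list)  # joined by the scale-out fabric

@dataclass
class TrainingCluster:
    sites: List[DataCenter] = field(default_factory=list)  # joined by the scale-across network

def total_gpus(cluster: TrainingCluster) -> int:
    # Count every accelerator a single training job could reach.
    return sum(rack.gpus for site in cluster.sites for rack in site.racks)

# One model trained across three sites, e.g. Oklahoma, Ohio, Mexico City.
cluster = TrainingCluster(
    sites=[DataCenter(racks=[Rack() for _ in range(128)]) for _ in range(3)]
)
print(total_gpus(cluster))  # 3 sites x 128 racks x 8 GPUs = 3072
```

The point of the toy model is simply that each tier multiplies the number of accelerators a single training job can reach, which is why the scale-up, scale-out, and scale-across fabrics all matter.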
The same type of chip could, theoretically, be used for everything in a data center. CPUs can do almost any task if you give them enough time. But that is exactly the problem. Vishal elaborates:
“You could argue that you just put CPUs everywhere and did your hardware acceleration there. It would get it done. They can do anything you ask of them. It's just that it'll be very slow. You will not get the kind of models you get today with Gemini 3 and GPT-5 Pro if you were training them on CPUs, because training itself would take years. And that's the reason why custom silicon has come into the picture.”
The industry moved first to GPUs, and then to custom ASICs like Google’s TPUs. These accelerators are built around the math that underpins deep learning. Software compilers break a model down into graphs and kernels that map directly onto the hardware blocks etched into silicon. It’s tight co-design between software and hardware. And as Vishal says:
“Everybody is in the custom silicon business now. Also us. Our Cisco Silicon One G200 is a great example. Built specifically for Ethernet networking in AI web-scale data centers, it supports a maximum throughput of 51.2 terabits per second, which means it’s designed for some of the densest, most demanding workloads.”
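For a sense of what that number means, a quick back-of-the-envelope calculation helps. Assuming 800 Gb/s Ethernet ports, a common speed in this class of switch rather than a claim about the G200’s exact port map, 51.2 Tb/s works out to 64 ports:

```python
# Back-of-the-envelope arithmetic, not a datasheet: what 51.2 Tb/s of
# switching capacity looks like in port terms, assuming 800 Gb/s ports.
throughput_tbps = 51.2
port_speed_gbps = 800  # assumed port speed for illustration
ports = throughput_tbps * 1000 / port_speed_gbps
print(f"{ports:.0f} x {port_speed_gbps}G ports")  # 64 x 800G ports
```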
Because materials determine transistor behavior, thermal dissipation, interconnect reliability, packaging density, power leakage, and long-term reliability, materials innovation plays a major part in enabling the required leaps forward. It's not all about architecture or compute. Vishal continues:
“Architecture is not physics. Material is physics. And you cannot bend or break the laws of physics. If you don't have the right material, you can't build what you want to build, no matter how great the architecture may be. That alone makes materials extremely important for the entire discussion about semiconductors and data centers. You need the right material for the transistor channel, like strained silicon and silicon germanium. But also for micro bumps, hybrid bonding, and 2.5D/3D packaging. Everything is driven by material. Things will downright melt if the material isn't right.”
Chip development is an unforgiving discipline. A single bug can force a retape-out and cost millions. That reality has shaped Vishal’s career and leadership.
“I am a big fan of Andy Grove,” he says, holding up his well-thumbed copy of High Output Management. Grove’s other classic, Only the Paranoid Survive, could just as well sit on his desk.
“In silicon you cannot take anything for granted,” Vishal explains. “It’s tough and you need a healthy dose of paranoia to anticipate everything. If you are chilled and say, ‘It looks fine, let us tape out,’ you will get in trouble.”
At the same time, he resists leadership clichés about always being hands-off or always micromanaging. Instead, he talks about his philosophy of ‘task-based maturity’. If someone or a team is doing something for the first time, he stays close. If they have done it many times and shown good judgment, he gives them room.
No single person can own everything in chip design. The work is highly complex and interdependent, which means that Vishal has spent a great deal of his career fostering the right culture.
“If you have people with high self-interest and very complex tasks, nothing will work,” he says. “Instead, you need a culture of low self-interest and high collaboration. People need to feel that we are in it together, and that you will not throw them under the bus when something goes wrong. This is Silicon Valley. At some point you will encounter bumps. We explore the edge of technology here. It’s not routine. It’s all about how you face challenges, and how you overcome them together.”
It is not just theory. Speaking from experience, Vishal is quietly proud of the fact that he and his teams have avoided catastrophic bugs that forced retape-outs.
“I try to educate my people at the same time as I set them free,” he says. “In a multibillion-dollar industry such as this one, it comes down to that. Not everyone enjoys full system-level thinking, right? Not everyone enjoys digging deeper and deeper and deeper until they know more than anyone on the planet about, say, cryptographic algorithms or the best way to build the RTL for that. Leadership is about recognizing the strengths in your people, knowing who will thrive working across the network versus who will be perfect for focusing on one block in your ASIC.”
Ask Vishal what he is proudest of, and he does not point to a single product, but instead to longer arcs of work.
“The first was Doppler, a Cisco networking architecture that became the foundation for multiple generations of chips when it came out in 2012. I started on the project as a young engineer after grad school and experienced how networking ASICs were built from scratch, eventually owning the brain part of its derivative chips,” he says.
If Doppler showed Vishal what was possible, Silicon One showed him what was next. He goes on:
“Then we moved on to Cisco Silicon One, which was a new architecture. One of my colleagues architected, and I designed, one of the very complicated algorithmic TCAMs that went into many of the chips in the Cisco Silicon One family. And then I have to say that playing a part in designing the Cisco Silicon One G200 that’s being used by web scalers like Meta and Microsoft has been a big thing. It’s always a long game. You do the architecture, you tape out, you wait for the market to react. The recognition often comes years later. The chip came out in 2023, and we just won the Cisco Pinnacle Award this year, so it’s quite a big feather in our cap. Finally, I played a major role in the NPU of the recently announced 102.4T G300 chip for scale-out AI networks, and that I consider a big accomplishment because we are the second company in the entire world to be able to go beyond the 100T barrier.”
Some people say that Silicon Valley is not just a location but a mentality. The concentration of expertise, the culture of fast iteration, and the tolerance for big technical bets are unique. Vishal agrees:
“You can learn the technical know-how of how to write the RTL and how to do the verification of chips everywhere. But having the courage to go from incremental changes to audacious innovation – that comes with the territory of the Valley. That kind of foresight, courage and willingness to rebuild, to acquire, to rethink your architecture because you sense where the market will go. That to me is very much a Silicon Valley ethos.”
Chip design is less about white coats and oscilloscopes than about disciplined software work and good mental models of systems you cannot see directly.
“It’s a common misconception that when we're building chips we sit in a lab, tinkering with hardware. That’s not true. What we do all day is actually to look at code, verify code, synthesize code, and lay out code. Everything happens on the computer. Of course we have hardware guys who put the board together, mount the chip, and make sure that the connections are made. But once everything's set up and hooked up to the networks, we can do pretty much everything, even the testing of the chip, remotely.”
So, what does all this mean for the next generation of AI data centers? Vishal’s answer circles back to experience. Each generation has taught him to respect physics, obsess over verification, rely on the right materials, and protect the culture of the teams that do the hard work.
And that is the real story behind the technology which is currently shaping modern AI. It is about people like Vishal Kirti who think in systems, are passionate about details, and quietly rewire what data centers are capable of. And just as you can’t spell brain without AI, you can’t build AI without the brains who understand the silicon beneath.
Vishal Kirti leads ASIC development at Cisco Systems, designing high-performance networking chips that drive global data centers and AI infrastructure. A key contributor to Cisco’s Silicon One platform, including the 51.2 Tbps G200, he combines systems-level understanding and engineering expertise shaped by degrees from IIT Madras and the University of Wisconsin–Madison.