Intel’s Chief Technology and AI Officer Sachin Katti leaves the company to join Sam Altman’s OpenAI
Sachin Katti, Intel’s Chief Technology and Artificial Intelligence Officer, is leaving the chipmaker to join OpenAI, the research company behind ChatGPT. Katti said he will focus on building OpenAI’s compute infrastructure for artificial general intelligence (AGI), the long-term effort to develop systems that can reason and learn in ways comparable to humans.
What Katti will build at OpenAI
OpenAI’s next phases of development require massive, reliable compute systems to train and deploy increasingly capable AI models. Katti’s new role centers on designing and scaling the infrastructure that powers this work—spanning data centers, networking, storage, accelerators, and the software stack that orchestrates them. The goal is to improve efficiency, capacity, and performance so OpenAI can push the boundaries of research and deliver more advanced generations of its models.
Reactions to the move
Katti expressed gratitude for his time at Intel, noting his leadership across networking, edge computing, and AI. OpenAI’s leadership welcomed him and highlighted the importance of building world-class infrastructure as a foundation for the company’s research ambitions and product roadmap.
Intel’s transition and priorities
Intel confirmed Katti’s departure and reiterated that AI remains one of the company’s highest strategic priorities. The chipmaker said it will continue executing its product roadmap to support rapidly evolving AI workloads across cloud, data center, and edge environments.
At Intel, Katti was appointed Chief Technology and AI Officer earlier this year, having previously led the Network and Edge Group since early 2023, where he helped steer the company’s edge and cloud AI initiatives. Intel said leadership of its AI and Advanced Technologies groups will continue as the company advances its strategy.
Intel’s evolving AI strategy
The transition comes as Intel works to rebuild momentum in its AI business. After falling short of a prior revenue target for its Gaudi AI accelerators, the company revamped its data center approach. At the 2025 OCP Global Summit, Intel announced a new energy‑efficient data center GPU with 160GB of memory and committed to an annual release cadence emphasizing open, modular architectures for AI systems. These moves are aimed at improving performance per watt, expanding memory capacity, and making it easier for partners to integrate Intel hardware into large-scale AI deployments.
Why the move matters
Katti’s expertise sits at the intersection of distributed systems, high-performance networking, and machine learning at the edge—capabilities that are increasingly critical as AI models grow in size and complexity. For OpenAI, strengthening the compute backbone is essential to scaling training runs, accelerating research, and improving reliability and cost-efficiency in production. For Intel, the departure underscores the intense competition for top AI talent as chipmakers and AI labs race to build the infrastructure that will power the next era of computing.
Background on Sachin Katti
A Stanford University professor and former startup founder, Katti has been widely recognized for his contributions to wireless networking and edge machine learning. His academic and industry work has focused on designing systems that move and process data quickly and efficiently—a skill set directly applicable to the challenges of building and optimizing large-scale AI compute.
Katti’s move highlights a broader industry trend: as AI models and their training runs scale up, the bottleneck increasingly shifts from algorithms alone to the underlying infrastructure. The next breakthroughs will rely as much on systems engineering and hardware-software co-design as on model architecture. OpenAI’s recruitment reflects that reality, while Intel’s continued investments show how central AI workloads have become to the future of silicon and data center design.