IBM CEO Arvind Krishna doesn’t believe today’s artificial intelligence is on a direct path to artificial general intelligence (AGI). Despite IBM’s long history in computing—from foundational 20th-century technologies to the more recent Watson supercomputer—Krishna acknowledges past missteps and explains why the current shift toward generative AI is both promising and fundamentally different.
The Evolution of AI at IBM
For decades, IBM has been a major player in AI research. Watson’s 2011 Jeopardy! victory showcased early natural language processing, but Krishna admits that pushing Watson into healthcare too soon was “inappropriate.” The initial approach was too monolithic, while engineers wanted modularity and customization.
The key difference now is the shift from bespoke deep learning models, which required massive labeled datasets and constant retraining, to large language models (LLMs). LLMs spend brute-force compute up front on pretraining, and in return deliver roughly a 100x improvement in how quickly models can be tuned and deployed.
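To make that contrast concrete, here is a minimal, hypothetical sketch of the two workflows. The bespoke path trains a small scikit-learn classifier on task-specific labels; the LLM path reduces the same task to a prompt against a pre-trained model. The `llm_complete` function is a placeholder, not a real API, and the toy data is invented for the example.

```python
# Hypothetical illustration of the two eras Krishna contrasts; not IBM's workflow.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Bespoke era: every new task needs its own labeled dataset and training run.
texts = ["invoice overdue", "meeting at 3pm", "payment received"]  # toy labeled data
labels = ["finance", "scheduling", "finance"]
vectorizer = TfidfVectorizer()
classifier = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)
print(classifier.predict(vectorizer.transform(["payment overdue notice"])))  # prints the predicted label

# LLM era: the same task becomes a prompt against one pre-trained model,
# with no task-specific labels and no retraining. `llm_complete` is a
# placeholder for whatever completion API is being used.
def llm_complete(prompt: str) -> str:
    raise NotImplementedError("call your LLM provider here")

prompt = "Classify this email as finance or scheduling: 'payment overdue notice'"
# answer = llm_complete(prompt)
```

The point of the sketch is the operational difference: the first path has to be repeated per task, while the second amortizes one expensive pretraining run across many tasks.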
The LLM Inflection Point
While Google pioneered much of the underlying technology (notably the 2017 “Attention Is All You Need” paper that introduced the transformer architecture), the industry’s pivot toward LLMs marked a crucial turning point. Krishna points out that LLMs reduce the need for constant human labeling, making them significantly more scalable and adaptable.
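For context on the paper mentioned above, the transformer’s core operation is scaled dot-product attention. The NumPy sketch below is a generic illustration of that mechanism, not Google’s or IBM’s implementation; the shapes and toy data are assumptions made for the example.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Generic attention: each query mixes the values, weighted by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # (seq, seq) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the keys
    return weights @ V                                 # (seq, d_k) blended values

# Toy example: a sequence of 4 tokens with 8-dimensional representations.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)     # (4, 8)
```

Because models built on this mechanism learn from raw text at pretraining time, adapting them to a new task typically means prompting or light tuning rather than assembling a new labeled corpus, which is the scalability point Krishna is making.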
This shift is not without costs. The infrastructure for LLMs requires massive investments in GPUs and data centers, and returns aren’t guaranteed. However, Krishna believes that semiconductor advancements and alternative architectures (like those from Groq and Cerebras) will drive down costs over the next five years.
Beyond LLMs: The Quantum Bet
Despite the current hype around LLMs, Krishna emphasizes that this is not the “end all.” IBM continues to invest heavily in quantum computing, treating LLMs as just one step in a longer technological evolution. Quantum remains a long-term bet, but Krishna believes it holds the key to solving problems beyond the reach of classical computing.
Ultimately, IBM is positioning itself for a future where AI is not just about faster computation, but about fundamentally new capabilities.
The transition is expensive and uncertain, but Krishna remains confident that IBM will not be left behind.
