A Field Born in a Summer of Optimism
In the mid‑1950s, a small group of scientists gathered at Dartmouth College with an audacious idea: intelligence itself could be engineered. They built on earlier philosophical work on logic and Alan Turing’s theory of computation, which showed that simple symbol‑manipulating machines could, in principle, perform any mathematical reasoning.
Early programs dazzled the press. Machines learned checkers strategies, solved algebra word problems, proved logical theorems, and even spoke simplified English. New laboratories sprang up across the US and UK. Leading researchers such as Herbert Simon and Marvin Minsky confidently predicted that within a generation machines would be able to do any work a human could do.
AI Winters: When the Money and Faith Ran Out
The reality was harsher. Techniques that worked on toy problems crumbled on messier, real‑world tasks. Critics like Sir James Lighthill argued that promised breakthroughs were nowhere in sight. By 1974, US and British governments slashed support. This first "AI winter" froze funding and careers.
A partial revival came in the early 1980s with expert systems—software that encoded the knowledge of human specialists into rules. For a moment, AI was a billion‑dollar market. Governments launched grand computing initiatives. Then the bottom fell out again: the specialized Lisp machines built to run these systems were undercut by cheaper general‑purpose computers, and a second, longer AI winter set in.
Sub‑Symbolic Revolt and the Return of Neural Nets
Hidden in this cold period were the seeds of the next revolution. A new wave of researchers doubted that high‑level symbols and logic alone could capture messy processes like perception or pattern recognition. They turned to "sub‑symbolic" methods.
Rodney Brooks built robots that relied less on abstract representations and more on direct interaction with the world. Judea Pearl and Lotfi Zadeh developed ways to handle uncertainty and vagueness instead of insisting on crisp logical truths. Most crucially, Geoffrey Hinton and others revived neural networks—mathematical models loosely inspired by brain cells.
In 1990, Yann LeCun showed that convolutional neural networks could reliably read handwritten digits, hinting that these methods might scale.
Deep Learning Explodes
Around 2012, everything changed. The ideas behind deep learning weren’t new, but they finally had two ingredients that had always been missing: enormous curated datasets and staggering compute power, especially from graphics processing units (GPUs). Deep neural networks suddenly crushed previous techniques on image recognition and other benchmarks.
By the late 2010s, deep learning dominated the field, and on many specific tasks older approaches were quietly abandoned in its favor. Industry poured money into AI: by around 2022, roughly US$50 billion per year was being invested in the US alone, and about a fifth of new computer science PhDs were specializing in AI.
The AGI Era and Public Breakout
At the same time, a new subfield—artificial general intelligence (AGI)—took shape, determined to revive the original dream of broadly capable machines. Companies such as DeepMind, OpenAI, and Meta set explicit AGI goals and attracted major funding.
High‑profile milestones followed. In 2016, DeepMind’s AlphaGo defeated Go world champion Lee Sedol; its successor, AlphaGo Zero, then mastered the game through self‑play, starting from nothing but the rules. In 2020, OpenAI’s GPT‑3 showed that language models could generate strikingly human‑like text. Then, in November 2022, ChatGPT burst into public consciousness, becoming the fastest‑growing consumer app seen to that point and marking a new, frenzied AI boom.
The field that once struggled for grants had become a global obsession—yet its early overconfidence and painful winters still haunt debates about how fast, and how far, to push the technology.
Takeaway
AI’s story is not a smooth ascent but a cycle of wild hopes, harsh disappointments, and sudden breakthroughs. Understanding that turbulent history is essential to judging today’s claims about what machines can—and should—be allowed to do next.