The AI Hype Machine: Part I
17 Apr 2018

Artificial intelligence (AI) is quickly becoming a deeply public, visible phenomenon. Insights from this field have already made their way into almost every facet of our lives, but they’ve done so with such subtlety that you’d be forgiven for not noticing. Open your inbox, click on an Instagram ad, or watch a Netflix suggestion and you’ve interacted with AI.

Despite the increasing prevalence of AI in our daily lives, this nebulous technology has yet to live up to the seemingly outlandish capabilities promised. Over the next few weeks, we’re sharing a series of blog posts exploring where AI technology delivers on its promises and where it fails to live up to the hype, especially as it applies to the risk industry. Today we share a brief overview of the rise and fall of AI enthusiasm over the past few decades.

False Hope & Failed Technology

In the eye of a maelstrom of hype lies artificial intelligence. The monolithic technology that almost no one can define, let alone explain, has inspired equal parts wonder and dread in public discourse. Some say that AI will usher in an era of unsurpassed peace and prosperity, allowing humans to maximize their creative potential. Others believe it will destroy our economy and emphasize the need for policy changes and preparation. Doomsayers insist that once true AI is online, a Terminator-like takeover is inevitable.

But if there are two truths about artificial intelligence that can be drawn from the last century, they are that the robot takeover is always coming, and it's always disappointing us.

An Overpromising Beginning

The Shakespeare reference wasn't just rhetorical flourish. The summer/winter metaphor has been the comparison of choice for describing the cyclical rise and fall of interest in AI. We are now in the third of the AI summers: periods in which public interest is high and research funding flows.

The first of these began in the summer of 1956. As America was in the throes of its Elvis craze, a small group of scientists gathered in Hanover, New Hampshire, for the Dartmouth Summer Research Project. There, Marvin Minsky, John McCarthy, Allen Newell, and other future members of AI's Mount Rushmore sketched the blueprint for a new discipline. It would be called "Artificial Intelligence," and it was going to change the world.

The Dartmouth project produced a wave of excitement that would last almost twenty years, inspire millions in investment, and fuel astounding predictions. In the late fifties, the US Navy believed scientists would soon produce a machine that could "walk, talk, see, write, reproduce itself and be conscious of its existence" (Gideon Lewis-Kraus). In 1965, H. A. Simon said, "machines will be capable, within twenty years, of doing any work a man can do," and two years later Marvin Minsky declared that "the problem of creating 'artificial intelligence' will substantially be solved" by the late 90s (Daniel Crevier).

By that time, we did have OK Computer—but nothing close to artificial general intelligence. And it didn't take that long for people to catch on. By the time the seventies rolled around, the research funding that had once flowed so freely from government organizations like ARPA (now DARPA) had evaporated. This was the first of two lulls that would later be known as "AI winters."

Coined in the early 80s, the term "AI winter" was inspired by the visions of atomic desolation—"nuclear winter"—that fueled nightmares during the Cold War era (Crevier). This first bitter period caused a shift in AI research. Instead of aiming to reproduce a machine analogue of the human mind, AI would solve narrow, well-defined problems—like assisting pilots or selecting construction materials (James S Somers). It also caused a shift in terminology. "AI" was dead. Long live "expert systems."

Digital Analog

Expert systems used a rule-based approach to artificial intelligence, and their construction was straightforward:

  1. Define a problem. For example, translation.
  2. Hire domain experts. For example, linguists in languages of interest.
  3. Have domain experts build a set of rules (sometimes in the millions) for a system to follow. For example, if given word x and context z, produce word y.1
  4. Compile rules into a single system. For example, the Auto-Translator 5000.
  5. Feed inputs, get outputs. For example, word x with context z becomes word y.
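The five-step recipe above can be sketched in a few lines of code. This is a deliberately toy illustration—the rule table, word pairs, and function names are all hypothetical stand-ins for the thousands or millions of hand-written rules a real expert system contained—but it captures both the pattern (if given word x and context z, produce word y) and the brittleness that doomed it:

```python
# A toy "expert system" for translation, in the spirit of step 3 above:
# domain experts encode (word, context) -> output rules by hand.
# These four rules are hypothetical examples, not from any real system.
RULES = {
    ("bank", "finance"): "banque",
    ("bank", "river"): "rive",
    ("bat", "sports"): "batte",
    ("bat", "animal"): "chauve-souris",
}

def translate(word: str, context: str) -> str:
    """Step 5: feed inputs, get outputs—by pure table lookup."""
    try:
        return RULES[(word, context)]
    except KeyError:
        # The failure mode described below: anything outside the
        # hand-built rule library simply cannot be handled.
        raise ValueError(f"No rule for {word!r} in context {context!r}")

print(translate("bank", "river"))  # rive
```

Every new word or context requires an expert to write a new rule, which is why such systems were expensive to build and nearly impossible to keep current.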

While expert systems made deep inroads into government organizations and corporations, they too fell victim to their own hype as implementation began revealing severe limitations. Bound by inflexible rule libraries, they were expensive and difficult (sometimes impossible) to update, time-consuming to build, and could not handle anomalous information or solve difficult problems…like language translation.

There was frustration. There was disappointment. There was no more money. Despite best efforts to actually apply AI in clear, constrained, and practical ways, the tide of sentiment again turned against the technology.

Enter, stage left: AI winter two.

Learn more:

The third wave of AI enthusiasm centered on applying machine learning to translation. Check back next week to learn more. Can't wait that long? Download the complete Honest Guide to AI for Risk now.

The Honest Guide to AI for Risk

1 This was sometimes done via knowledge engineers: technical staff with a background in a given area who could translate the knowledge of subject-matter experts into rules.