Artificial Intelligence: Defining Our Terms
Continuing the story of the current AI revolution, and how we got here.
Like Swaine’s World? Scroll to the bottom of this post to tip the author or to subscribe.
THE WEEK
Image of the Week
As usual, the featured picture has nothing to do with the topic of the post. It justifies its presence by being an image of Swaine’s world. This particular image is a photo of the herb garden of Summer Jo’s Farm, Garden, and Restaurant, the business my partner Nancy owned and ran from 1999–2013. We took the picture from a hot-air balloon.
ARTIFICIAL INTELLIGENCE: DEFINING OUR TERMS
AI. Artificial Intelligence. GAI. Generative Artificial Intelligence. AGI. Artificial General Intelligence. If you follow technology at all, you’ve been inundated by a flood of news stories about AI. Even if you’re not following technology, you haven’t been able to avoid getting some of it splashed on you. We’re in a new era of (at least articles about) Artificial Intelligence.
And when we talk or read about AI today, we’re mostly talking or reading about Machine Learning. For decades, though, Machine Learning was just one research area out of many in the general field of AI research, and not the most prominent or the best funded.
I want, in this series of blog posts, to recount the history of Machine Learning, but I don’t think I can even begin effectively without situating Machine Learning in that broader AI context. So I’m going to briefly survey the history of AI in general.
Usually a solid way to start any essay (and I’m going to claim that’s what this post is) is by defining our terms. So here’s a definition of Artificial Intelligence.
“Artificial Intelligence is any instance of an artifact exhibiting behavior that we would regard as requiring intelligence if performed by a human.”
I made that up, but it borrows from other definitions that have been offered, and is pretty close to the original definition given by the inventor of Artificial Intelligence, John McCarthy: “making machines behave in ways that would be called intelligent if a human were so behaving.”
(That offhand claim that McCarthy was the “inventor of Artificial Intelligence” deserves to be explored further, because it leads to some important insight into the direction AI research took right from the start. I’ll get back to that later. Probably next week.)
McCarthy had the right to define the field, and I think his definition has held up well. I do think “artifice” is better in this context than “machine,” though, since it more obviously includes software and other potent creations we may come up with in the future. And also… but I’ll get back to that.
McCarthy’s definition and the variants of it that have been proposed, including mine, have proven in some ways not to be very good definitions, but a definition along these lines is still useful. For one thing, it doesn’t dodge the fear factor.
A topic I’ve been thinking a lot about lately is the fear of Artificial Intelligence. (I talked about this three weeks ago here.) The fear of being replaced, in our employment (automation) or in our relationships (robotic pets or companions or caretakers); the fear of robots acting without moral constraints; the fear of losing purpose (are we needed? and will AIs decide we are not?).
The fears are both old and new.
You know about the new fears, both of AI eliminating jobs and of AI destroying civilization in any of a number of not-altogether-implausible ways: they come up in every other article about AI today.
The old fears are not hidden; they can be found in both historical and literary records: in the story of the Luddites and their rebellion against automation, as told in Brian Merchant’s excellent Blood in the Machine, and in the classic fictional story of the creation of an artificial (that is, made by artifice) intelligent being, Mary Wollstonecraft Shelley’s Frankenstein; or, the Modern Prometheus.
“Any instance of an artifact exhibiting behavior that we would regard as requiring intelligence if performed by a human” could, Shelley tells us, be a monster. So the definition has that going for it: it reminds us to be on the lookout for monsters. Which I more delicately referred to above as “other potent creations we may come up with in the future.” AI doesn’t have to mean computers, or robots. We’re just at the beginning of machine-human hybrids. Who knows what meat machines may be in our future? Monsters.
Whew. I scared myself there. Back to the definition:
That definition was a curse for those of us following and trying to write about AI over the decades, as well as for those actually doing the research and developing the systems and products. The reason: Throughout the history of AI, whenever a problem was solved, it became no longer considered AI; it had become just software. Whenever an activity would succumb to automation, everyone would decide that it didn’t require human intelligence after all, so it didn’t qualify as AI. By definition.
All that said, the definition does generally describe the research areas that have come under that umbrella term Artificial Intelligence. So let’s at least itemize those research areas, both for historical interest and because they all still pose open research questions.
In a broad sense, the scope of all this research was set at a conference in the summer of 1956 at Dartmouth College in New Hampshire. At this conference, according to its funding proposal, “an attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” Language, forming concepts, problem solving, and that intriguing “improve themselves.” Sounds like machine learning to me. One of the proposers of this conference was Dartmouth Assistant Professor of Mathematics John McCarthy, and he called the subject of the conference “Artificial Intelligence,” the first use of the term.
The conference kicked off research in many universities, and by 1980, the research areas in Artificial Intelligence could be fairly clearly defined and differentiated.
Expert systems encoded the knowledge of human experts, such as doctors, and sought to make predictions, offer diagnoses, and assist those experts. Expert systems got a lot of the grant money and attention, in part because they promised some of the most immediate and practical results. There were also research programs in the “understanding,” narrowly defined, of spoken or written language. There were systems for computer vision that performed feats like answering simple questions about simple visual scenes, such as a world of stacked blocks. There were attempts to model planning and problem solving, to perform logical deduction, and to generate computer programs from specifications. There was robotics research.
And there were research programs in machine learning, including learning from advice and learning from examples. But it was not at all obvious at the time that Machine Learning would turn out to be the breakthrough research program.
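To make that distinction concrete, here’s a toy sketch in Python. It’s entirely my own illustration, not drawn from any actual system of that era, and the scenario, function names, and thresholds are all made up: a hand-written diagnostic rule of the sort an expert system might encode, next to a tiny routine that induces a similar rule from labeled examples.

```python
# Toy illustration only: the flu scenario, names, and numbers are hypothetical.

# Expert-system style: a human expert hand-codes the decision rule.
def expert_rule(temperature_f: float, has_cough: bool) -> str:
    """A diagnostic rule written by hand, encoding an expert's knowledge."""
    if temperature_f >= 100.4 and has_cough:
        return "likely flu"
    return "likely not flu"


# Machine-learning style: a rule is induced from labeled examples.
def learn_threshold(examples: list[tuple[float, bool]]) -> float:
    """Pick the temperature cutoff that classifies the most
    (temperature_f, is_flu) examples correctly; learning from examples."""
    candidates = sorted(t for t, _ in examples)
    best_cutoff, best_correct = candidates[0], -1
    for cutoff in candidates:
        correct = sum((t >= cutoff) == is_flu for t, is_flu in examples)
        if correct > best_correct:
            best_cutoff, best_correct = cutoff, correct
    return best_cutoff


if __name__ == "__main__":
    labeled_cases = [(98.6, False), (99.1, False), (100.8, True),
                     (101.5, True), (102.2, True)]
    print(expert_rule(101.0, True))        # hand-coded knowledge: "likely flu"
    print(learn_threshold(labeled_cases))  # cutoff induced from the data: 100.8
```

The hand-coded rule knows only what the expert wrote down; the learned rule knows only what the examples show it. That trade-off looked unpromising when examples were scarce, which is part of why Machine Learning didn’t stand out at the time.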
In upcoming posts I’ll discuss some or all of these topics: the Dartmouth Conference, the outsized influence of two strong personalities on the direction of AI research, the two main historical threads of AI research, the significant breakthroughs, insights drawn from neuroscience, and the emergence of the key technologies underlying Machine Learning.
Plus more about fear.
BEFORE YOU GO…
The Pragmatic Bookshelf
Blogroll
AI Supremacy
Ahead of AI
Mark Watson’s AI Books and Blog
Doctors Without Borders
World Central Kitchen
Kent Beck’s advice for geeks
Tales from the Jar Side
Bookshop.org
New York Review of Books
Pragmatic Bookshelf
ICYMI
Thanks for reading. You can read all the back issues of Swaine’s World at my blog home.
Coming Attractions
In the coming weeks, more on the story of the AI revolution and how we got here.