CMSC 491M/691M - Spring 2003
Discussion Questions for Class #12, March 5
Reading: Brooks ("...representation," "...reason")
There are lots of interesting philosophical and conceptual issues raised
by these papers, though they're not technically very deep. Here are just
a few things we might want to talk about in class:
As a side comment, note the diversity of terminology that is
used to refer to "Brooks-style" approaches ("...representation" p. 2): reactive
planning, robot beings, artificial creatures, active vision, animate vision,
agents, behavior-based systems, robots, and subsumption architectures.
The "representation" paper lists four key aspects of "Brooks-style" approaches: situatedness,
embodiment, intelligence, and emergence. Are these issues inherently intertwined?
Can they be studied in isolation?
In Section 7.1 of the "...reason" paper, Brooks says "we believe representations
are not necessary and appear only in the eye or mind of the beholder."
If one believes that Brooks's approach is a good model of what goes on
in the human brain, this implies that human brains do not have representations,
but that we nonetheless perceive them to have representations. How
can one perceive a representation if one doesn't have representations?
Furthermore, much of our interaction with the world, at least with other
intelligent agents (i.e., other people), is linguistic, which is inherently
representational. What does a lack of representation mean for linguistic
communication?
Brooks says that "Most of what people do in their day to day lives is not
problem-solving or planning, but rather it is routine activity in a relatively
benign, but certainly dynamic, world." ("...representation" p. 2) He uses
this as an argument for working on these lower-level forms of intelligence.
Yet one could argue that the "routine" activities can be performed by
all higher animals, and that it's exactly the problem-solving
and planning activity that makes human intelligence unique. Do you think
that systems that focus on lower-level intelligence will scale up to human-level
or human-style intelligence? Why or why not?
Brooks argues for a "bottom-up" approach to designing autonomous robots. A
counter-argument might go like this: Why start this bottom-up process with
behaviors? Why not with neurons? Organic chemistry? Molecular biology?
Do you think that Brooks-style bottom-up design could someday "meet" McCarthy-style
top-down design somewhere in the middle?