CMSC 491M/691M - Spring 2003
Discussion Questions for Class #13, March 10
Reading: Agre and Chapman (Pengi), Arkin and Balch (AuRA).
Following in Brooks's footsteps, Agre and Chapman claim that
"...activity mostly derives from very simple sorts of machinery..."
Do you think this is true, at least for some tasks? If so, for what
sorts of tasks is it true?
Does playing the game of Pengo require any deliberative strategy
(i.e., reasoning ahead to the consequences of one's actions, or
formulating a systematic plan to achieve a set of goals)? How do
its characteristics in this regard make it an appropriate domain for
the type of reasoning mechanisms used in Pengi?
The Pengi implementation assumes that the environment is fully
accessible. What problems might arise if the environment were only
partially accessible (e.g., if the penguin couldn't see through
blocks)? What extensions to Pengi would be necessary?
Do you think the indexical-functional representations that are the
basis of Pengi's reasoning system could be usefully incorporated into
a deliberative reasoning system (i.e., a "classical AI planner")?
Pengi uses representations like "the-bee-that-is-chasing-me." What do
you think would happen if there were two bees chasing it?
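To make the question concrete, here is a minimal sketch (not from the paper; all names and the chasing test are illustrative assumptions) of how an indexical-functional aspect like "the-bee-that-is-chasing-me" might be recomputed from the current percept each tick, with no persistent object identities:

```python
# Hedged sketch of an indexical-functional aspect. Instead of naming
# and tracking individual bee objects, the agent recomputes the role
# "the-bee-that-is-chasing-me" from the current percept every tick.
# The bee dict layout, heading_toward test, and tie-breaking by
# distance are illustrative assumptions, not Pengi's actual machinery.

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def heading_toward(bee, me):
    # A bee "chases" if its velocity has a positive component toward the agent.
    bx, by = bee["pos"]
    vx, vy = bee["vel"]
    dx, dy = me[0] - bx, me[1] - by
    return vx * dx + vy * dy > 0

def the_bee_that_is_chasing_me(me, bees):
    """Return whichever bee currently fills the role, or None.
    If two bees chase, the aspect silently resolves to the nearer one;
    identity is never carried over from the previous tick."""
    chasing = [b for b in bees if heading_toward(b, me)]
    if not chasing:
        return None
    return min(chasing, key=lambda b: dist(b["pos"], me))
```

Note that with two chasing bees the function still returns a single referent, which is exactly the ambiguity the question probes: the representation has no way to say "the other bee."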
Agre and Chapman say that "Avoiding the representation of individuals
bypasses the overhead of instantiation." One might argue that the
instantiation overhead is hardwired into the system (i.e., in the
computation of the indexical-functional), therefore the system really
is doing instantiation, but in a very non-generalizable way. What are
the tradeoffs implicit in this debate?
Learning and Knowledge Engineering
People can learn to play new video games. Do you think Pengi could be
modified to learn to play a new domain? Why or why not?
Agre and Chapman state that "an agent executing a plan is inflexible."
Is this necessarily the case?
Basic concepts: What's a motor schema? What's an assemblage?
How are motor schemas integrated within an assemblage?
Brooks's subsumption architecture uses suppression and inhibition to
coordinate layers. Motor schemas use weighted vector addition. Can
you think of domains or applications that would be particularly suited
for one or the other of these coordination mechanisms?
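As a concrete point of comparison, here is a minimal sketch of motor-schema coordination by weighted vector addition (the schema names, gains, and repulsion formula are illustrative assumptions, not the exact formulation in Arkin and Balch):

```python
import math

# Hedged sketch: each motor schema maps the perceived state to a 2-D
# velocity vector, and the assemblage output is the gain-weighted sum.
# Contrast with subsumption, where one layer's output suppresses or
# inhibits another's rather than being averaged with it.

def move_to_goal(pos, goal):
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(dx, dy) or 1.0
    return (dx / d, dy / d)                # unit vector toward the goal

def avoid_obstacle(pos, obstacle, radius=2.0):
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if d >= radius or d == 0.0:
        return (0.0, 0.0)                  # outside the sphere of influence
    mag = (radius - d) / radius            # repulsion grows near the obstacle
    return (mag * dx / d, mag * dy / d)

def blend(vectors_and_gains):
    # Weighted vector addition: the core motor-schema coordination step.
    vx = sum(g * v[0] for v, g in vectors_and_gains)
    vy = sum(g * v[1] for v, g in vectors_and_gains)
    return (vx, vy)

pos, goal, obstacle = (0.0, 0.0), (10.0, 0.0), (5.0, 0.5)
v = blend([(move_to_goal(pos, goal), 1.0),
           (avoid_obstacle(pos, obstacle), 0.8)])
```

The blended result can pull in a direction no single schema votes for, whereas suppression/inhibition always passes through exactly one layer's output; that difference is worth keeping in mind when matching domains to coordination mechanisms.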
What kinds of learning mechanisms can you imagine for a robot based on
the AuRA architecture?
AuRA is very tailored to robot behavior, and specifically robot
navigation. Do you think that AuRA's hybrid architecture would be
relevant for other types of tasks (e.g., robot activity using
effector systems other than movement; high-level reasoning tasks like
planning a trip; dynamic situation response tasks like air traffic
control; mixed reasoning/effecting tasks like shelving books in a
library)?
AuRA purports to be a hybrid deliberative/reactive architecture.
Would you characterize it as primarily reactive, primarily
deliberative, or an equal mix?
Would you call the kind of reasoning that AuRA uses in its "Mission
Planner" planning? What issues would there be in integrating more
sophisticated planning methods (e.g., partial-order planning, HTN
planning) into the AuRA architecture?
Could you integrate Pengi's indexical-functional representation into
the AuRA architecture? What would the challenges be?