CMSC 477/677 - Spring 2005
Discussion Questions for Class #11, March 8
Reading: Society of Mind review; Minsky, Society of Mind excerpts
- Is Minsky's notion of an agent the same as the concept of an agent
we have been discussing in this class (to the extent, that is, that we can
agree on what an agent is!)?
- What is emergence in Minsky's paradigm? How is emergence in
the Society of Mind similar or dissimilar to Brooks's emergent intelligence?
More generally, what do you think would happen if Brooks and Minsky were
to get into a debate about intelligence? Would they generally agree or disagree?
- Do you think Minsky is proposing his Society of Mind primarily as a
way to understand human intelligence, as a model for building intelligent
systems, or just out of a sense of whimsy? Notice that he uses the blocks
world as an example. Do you think the Society of Mind approach would scale
to real problems? Why or why not?
- Minsky argues that "...we're least aware of what our minds do best"---i.e.,
common-sense reasoning---and implies that designing common-sense reasoners
is at the core of AI. Brooks argues that embedded/situated perception and
motion is at the core of AI. Presumably McCarthy would argue that high-level,
abstract reasoning (i.e., "cognition") is at the core of AI. What do you
think?
- Minsky's design exploits the massive parallelism of the human brain.
Do you think it's possible that we won't be able to solve AI until we model
that massive parallelism in hardware? Or can we just simulate massive parallelism?
Or do we not need massive parallelism at all?
- Here are some interesting phenomena that Minsky identifies in his Society
of Mind. Can you think of situations where your own reasoning process
seemed to exhibit some of these phenomena? (Or where it seemed
inconsistent with them?)
- Conflicts between agents migrate upwards.
- Noncompromise weakens agents.
- Difference engines drive agents to reduce differences between actual
and desired inputs.
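The last phenomenon can be read as a small search procedure: an agent repeatedly compares its actual input to its desired input and applies whichever available action most reduces the difference. Here is a minimal sketch in Python, using a toy blocks-world-flavored state; all names and the state representation are illustrative assumptions, not Minsky's own formulation.

```python
# Illustrative sketch of a "difference engine" agent: greedily apply
# the action that most reduces the difference between the actual and
# desired states. The state encoding is a made-up toy, not from Minsky.

def difference(actual, desired):
    # Number of slots where the two states disagree.
    return sum(a != d for a, d in zip(actual, desired))

def difference_engine(actual, desired, actions, max_steps=20):
    # Greedy reduction: stop when the difference is zero or no action helps.
    for _ in range(max_steps):
        if difference(actual, desired) == 0:
            break
        candidates = [act(actual) for act in actions]
        best = min(candidates, key=lambda s: difference(s, desired))
        if difference(best, desired) >= difference(actual, desired):
            break  # stuck: no available action reduces the difference
        actual = best
    return actual

# Toy example: a state is a tuple of block positions, and each
# (hypothetical) action moves one block to its target position.
desired = ("table", "on-A", "on-B")
actions = [lambda s, i=i: s[:i] + (desired[i],) + s[i + 1:]
           for i in range(len(desired))]

print(difference_engine(("on-C", "table", "table"), desired, actions))
```

Each pass fixes the one slot that most reduces the remaining difference, which is the greedy, local character of the mechanism worth debating against deliberative planning.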