CMSC 477/677 - Spring 2005
Discussion Questions for Class #4, February 10
Reading: Wooldridge ch. 4, Bratman et al., "Plans and resource-bounded
practical reasoning"
Wooldridge
- What is the difference between deliberation and means-ends
reasoning?
- Discuss the key properties of intentions as presented by Wooldridge:
serving as pro-attitudes, persistence, filtering actions, and influencing
future beliefs.
- What is the difference between a desire and an intention?
- What are intention-belief inconsistency and intention-belief
incompleteness? Do you think that people exhibit these behaviors in their
reasoning? Do you think that agents ever should?
- What is the difference between a plan and an intention
in Wooldridge's framework? What's the difference (if any) between an intention
and a goal in AI planning?
- The notion of a commitment is useful for at least two things: (1)
interacting with other agents (e.g., in joint intentions theory, which we'll
discuss later in the semester) and (2) validating agent designs (i.e., ensuring
that agents really do what they decide to do). But are commitments really
useful for building agents, or are they just nice theoretical constructs?
- Are the commitment strategies listed on page 77 sufficiently rich
to represent the range of possible commitment strategies? Discuss commitment
(and commitment strategies) in some real-world environments.
- Do the results of Kinny and Georgeff's bold vs. cautious agent experiments
tell us anything non-obvious? Can you think of a more useful experimental
design for exploring the behavior of different commitment strategies?
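As a concrete anchor for the questions above, the deliberation/means-ends split and the bold vs. cautious commitment strategies can be sketched as a simple control loop. This is an illustrative sketch only, not Wooldridge's actual pseudocode; the function names (`deliberate`, `plan`, `execute_step`, `observe`) and the belief representation are assumptions made for the example.

```python
# Minimal BDI-style control loop illustrating two commitment strategies.
# A "bold" agent commits to its plan and reconsiders its intention only
# after the plan finishes; a "cautious" agent reconsiders after every action.
# deliberate(beliefs) returns an intention, or None when nothing is worth
# pursuing; plan(beliefs, intention) does means-ends reasoning, returning
# a list of actions.

def bdi_loop(beliefs, deliberate, plan, execute_step, observe, cautious=False):
    intention = deliberate(beliefs)
    while intention is not None:
        actions = plan(beliefs, intention)          # means-ends reasoning
        for action in actions:
            execute_step(action)
            beliefs = observe(beliefs, action)      # update beliefs from the world
            if cautious:
                # Cautious strategy: reconsider after every single step.
                new_intention = deliberate(beliefs)
                if new_intention != intention:
                    intention = new_intention
                    break                           # drop the old plan, replan
        else:
            # Bold strategy (or the plan simply finished): reconsider only now.
            intention = deliberate(beliefs)
    return beliefs
```

A cautious agent wastes effort reconsidering in a static world but recovers quickly in a dynamic one; a bold agent is the reverse, which is essentially the trade-off the Kinny and Georgeff experiments measure.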
Bratman et al.
- Setting aside resource-boundedness, what do you think would be some
of the key challenges in building a means-ends agent along the lines described
by Bratman et al.?
- What is the purpose of the compatibility filter in the agent
design proposed here? Why is a filter override mechanism needed?
Could you implement a filter override based on the description given
here? If so, how would you start? If not, why not?
- How would reasoning about uncertainty fit into this architecture?
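To make the filter questions above more concrete, here is one possible starting point for a compatibility filter with an override, loosely in the spirit of the architecture Bratman et al. describe. Everything here is an assumption for illustration: representing compatibility as non-overlapping resource use, and triggering the override on a utility threshold, are stand-ins for whatever real tests an implementation would need.

```python
# Sketch of a compatibility filter plus override mechanism. Options that are
# compatible with existing intentions pass the filter; incompatible options
# are normally dropped, but the override can flag some of them (e.g., very
# high-stakes ones) for full deliberation anyway.

def compatible(option, intentions):
    # Toy compatibility test: the option must not use any resource already
    # claimed by a current intention.
    used = {r for i in intentions for r in i["resources"]}
    return not (set(option["resources"]) & used)

def filter_options(options, intentions, override):
    """Return (admitted, flagged): options passing the filter, and
    incompatible options rescued by the override for deliberation."""
    admitted, flagged = [], []
    for opt in options:
        if compatible(opt, intentions):
            admitted.append(opt)
        elif override(opt):
            flagged.append(opt)   # incompatible, but important enough to reconsider
    return admitted, flagged
```

For example, with an intention that claims the "evening" slot, a gym trip in the morning passes the filter, a movie that evening is filtered out, and a high-utility emergency that evening is flagged by the override. The hard open question, of course, is what the real `compatible` and `override` tests should be.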