CMSC 671 - Homework #5
Out 11/5/03, due 11/19/03
1. Semantic Networks (15 pts.)
Represent the following KB using a semantic network:
- 201 is a programming class.
- 202 is a programming class.
- 341 is a programming class.
- 341 is a theory class.
- STAT 355 is a statistics class.
- 441 is a theory class.
- 471 is an AI class.
- 477 is an AI class.
- Programming, theory, and AI classes are all computer science classes.
- Prerequisites are as follows: 201 < 202 < 341 < 471.
- Recommended class orders are as follows: 441 < 471, STAT 355 < 471, 471 < 477.
- Computer science classes are hard.
- Programming classes are easy.
- Theory classes are hard.
How would a semantic network be used to answer the following questions?
Discuss how alternative implementations might yield different answers,
and what other information might be needed in order to provide an unambiguous
answer.
- Is 471 hard?
- Is 341 hard?
- Is STAT 355 hard?
- Is 201 a prerequisite for 471?
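To see how inheritance over the network bears on these questions, here is a minimal sketch. The dictionary encoding and the "collect every reachable difficulty value" policy are my own illustration, not a required representation; notice how conflicting defaults surface as multiple answers.

```python
# Tiny semantic network with ISA inheritance (illustrative encoding only).
isa = {  # instance or subclass -> parent classes
    "201": ["programming"], "202": ["programming"],
    "341": ["programming", "theory"],
    "STAT355": ["statistics"],
    "441": ["theory"], "471": ["AI"], "477": ["AI"],
    "programming": ["CS"], "theory": ["CS"], "AI": ["CS"],
}

difficulty = {  # property links attached to class nodes
    "CS": "hard",
    "programming": "easy",
    "theory": "hard",
}

def difficulties(node):
    """Collect every difficulty value reachable by following ISA links.

    With conflicting defaults (programming -> easy, CS -> hard) a naive
    walk returns more than one value; a real system needs a policy such
    as 'most specific class wins' to disambiguate.
    """
    found = set()
    frontier = [node]
    while frontier:
        n = frontier.pop()
        if n in difficulty:
            found.add(difficulty[n])
        frontier.extend(isa.get(n, []))
    return found

print(difficulties("471"))      # only 'hard' (via AI -> CS)
print(difficulties("341"))      # ambiguous: both 'easy' and 'hard' are reachable
print(difficulties("STAT355"))  # empty: no path reaches a difficulty link
```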
2. Planning Representations (20 pts.)
Our agent is hungry and unhappy, and wants to go on a vacation, sunbathe,
and eat well. Our agent starts out rich, but only has limited options,
some of which will consume a substantial part of its wealth.
The static predicates for the domain are:
- have(car) (The agent has a car)
- distance(x,y,d) (The distance from x to y is d; distance is assumed to be symmetric, i.e., distance(x,y,d) <=> distance(y,x,d))
- temp(x,t) (The temperature at location x is t)
- in(r,x) (Restaurant r is in city x)
- beach(x) (There is a beach in location x)
- fastfood(r) (r is a fast-food restaurant)
- swanky(r) (r is a swanky restaurant)
The dynamic predicates for the domain are:
- at(x,s) (The agent is at x in situation s)
- happy(s) (The agent is happy in situation s)
- hungry(s) (The agent is hungry in situation s)
- rich(s) (The agent is rich in situation s)
(Note that the situation variables will only be used in part (a) of this problem.)
The constants in the domain are:
- Balt (Baltimore)
- Berm (Bermuda)
- OC (Ocean City)
- McDs (McDonald's)
- BK (Burger King)
- CTC (Chez Tres Cher)
- MC (Molto Caro)
The initial state is:
- at(Balt, s0) ^ ~happy(s0) ^ hungry(s0) ^ rich(s0)
The static predicates in the KB are:
- distance(Balt,OC,150)
- distance(Balt,Berm,500)
- in(McDs,Berm)
- in(BK,OC)
- in(CTC,Berm)
- in(MC,OC)
- fastfood(McDs)
- fastfood(BK)
- swanky(CTC)
- swanky(MC)
- have(car)
- beach(OC)
- beach(Berm)
- temp(Balt,50)
- temp(OC,60)
- temp(Berm,80)
The actions in the domain are as follows:
- drive(x,y): The agent can drive from location x to location y if it has a car and the distance from x to y is less than or equal to 200.
- fly(x,y): The agent can fly from location x to location y if it is rich and the distance from x to y is greater than 200. Flights are expensive, so after flying, the agent is no longer rich.
- sunbathe(x): The agent can sunbathe if it is in a location (x) with a beach. The agent will end up happy after sunbathing if the temperature is above 75, but unhappy if the temperature is below 75.
- eat(r): If the agent is hungry [i.e., hunger is a precondition], it can eat at a restaurant r that is in the same location as the agent. The agent will end up not hungry, and will be happy if the restaurant is a swanky restaurant. However, it can only eat at a swanky restaurant if it is rich, and at the end of the action, will not be rich any more. (Fast-food restaurants don't require richness, and don't change the status of the richness predicate.)
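To fix the idea of an operator with a precondition, add list, and delete list, here is one possible encoding of just the drive action as data, together with state progression by set difference and union. The schema and predicate spellings are my own illustration; the homework parts below ask you to write out all the operators in your own notation.

```python
# Illustrative STRIPS-style encoding (my own schema, not the required answer).

def drive(x, y, d):
    """STRIPS schema for drive(x,y), instantiated with a known distance d.

    Applicable only if the distance is at most 200; the static facts
    have(car) and distance(x,y,d) appear among the preconditions."""
    if d > 200:
        return None
    return {
        "name": f"drive({x},{y})",
        "precond": {f"at({x})", "have(car)", f"distance({x},{y},{d})"},
        "add": {f"at({y})"},
        "delete": {f"at({x})"},
    }

def progress(state, op):
    """Apply an operator to a state of ground facts if its preconditions hold."""
    if op is None or not op["precond"] <= state:
        return None
    return (state - op["delete"]) | op["add"]

s0 = {"at(Balt)", "have(car)", "distance(Balt,OC,150)", "hungry", "rich"}
s1 = progress(s0, drive("Balt", "OC", 150))
print(s1)  # at(OC) replaces at(Balt); static facts and hungry/rich persist
```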
(a) (5 pts.) Describe only the Happy predicate using
the situation calculus. You should have one or more possibility axioms
(one for each relevant action) and one or more successor-state axioms
(one for each relevant action). These axioms should characterize how the
state of the Happy predicate changes as a result of the actions in the
domain, in terms of the domain predicates listed above.
(You may want to refer to pages 330-333 in the book.)
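As a reminder of the shape you are after, the generic successor-state axiom schema from that part of the book can be sketched as follows; the Gamma notation for the "made true" and "made false" conditions is mine.

```latex
% Generic successor-state axiom schema: \Gamma^{+}_{P}(a,s) collects the
% conditions under which action a makes P true, and \Gamma^{-}_{P}(a,s)
% those under which a makes P false.
\mathrm{Poss}(a,s) \Rightarrow
  \big[\, P(\mathrm{Result}(a,s)) \Leftrightarrow
    \Gamma^{+}_{P}(a,s) \lor \big( P(s) \land \lnot \Gamma^{-}_{P}(a,s) \big) \,\big]
```

Your axioms for Happy should instantiate this pattern using the actions and domain predicates above.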
(b) (5 pts.) Describe the actions in this domain as STRIPS operators.
Be sure to include all preconditions, add lists, and delete lists.
(See pages 377-378.)
(c) (5 pts.) Show two different legal plans (sequences of actions) for
achieving the goal described above from the given initial state.
(d) (5 pts.) How many legal plans are there for this goal? Explain your
answer.
3. Partial-Order Planning (25 pts.)
Suppose that the agent starts building a partial-order plan to achieve
the goal in the domain from problem #2. The agent decides to drive to Ocean
City and eat at Burger King. Draw the partial-order plan at this point
in the planning process. You do not need to show the dependencies associated
with static predicates. Show all dependencies (ordering links and causal
links) associated with dynamic predicates. Ordering links should be drawn
as a thin, single arrow; causal links, as a double or thick arrow.
Now suppose that the agent decides to satisfy its Happiness goal using
the Sunbathe action. Insert this action into the plan, showing all dependencies.
Will this plan succeed? If so, complete the partial plan, resolving any
threats that arise. If not, complete the partial plan to the point where
planning fails, and explain the source of the failure.
4. GraphPlan (20 pts.)
Formulate the same problem as in #2 and #3 propositionally. Draw the planning
graph through the third situation level (i.e., you should show levels S0,
A0, S1, A1, and S2). Show all mutex constraints at the level
A0 and level S1. Explain briefly how each mutex constraint is derived.
(You can mark the mutex constraints on the graph, and give a list of explanations
separately.) Do not show mutex constraints after the second situation level.
(Your graph will quickly get too complicated to read!) Do not show any
static predicates in the planning graph.
5. SATPlan (10 pts.)
(Russell & Norvig 11.17)
6. HTN Planning (10 pts.)
(Russell & Norvig 12.4)