Knowledge Representation

Knowledge representation (KR) refers to the general topic of how information can be appropriately encoded and utilized in computational models of cognition. It is a broad, rather catholic field with links to logic, computer science, cognitive and perceptual psychology, linguistics, and other parts of cognitive science. Some KR work aims for psychological or linguistic plausibility, but much is motivated more by engineering concerns, a tension that runs through the entire field of AI. KR work typically ignores purely philosophical issues, but related areas in philosophy include analyses of mental representation, deductive reasoning and the "language of thought," philosophy of language, and philosophical logic.

Typically, work in knowledge representation focuses either on the representational formalism or on the information to be encoded in it, sometimes called knowledge engineering. Although many AI systems use ad hoc representations tailored to a particular application, such as digital maps for robot navigation or graphlike story scripts for language comprehension, much KR work is motivated by the perceived need for a uniform representation and by the intuition that, because human intelligence can rapidly draw appropriate conclusions, KR should seek conceptual frameworks in which these conclusions have short derivations. The philosophical integrity or elegance of this assumed framework is less important than its practical effectiveness; for example, Jerry Hobbs (1985) urges a principle of ontological promiscuity in KR.

The central task in knowledge engineering is to identify an appropriate conceptual vocabulary; a related collection of formalized concepts is often called an ontology. For example, temporal or dynamic knowledge is often represented by describing actions as functions on states of the world, using axioms to give sufficient conditions for the success of an action, and then using logical reasoning to prove constructively that a state exists that satisfies a goal. The name of the final state then provides a "plan" of the actions necessary to achieve the goal, such as: drink(move(mouth,pickup(cup,start-state))). Though useful, this ontology has several stubborn difficulties, notably the FRAME PROBLEM, that is, how to state compactly what remains unchanged by an action. (For example, picking something up from a table obviously leaves the table in the same place and does not change the color of anything, but because the state has changed, this must be made explicit; and some actions do have such side effects, so the possibility cannot be ruled out on logical grounds.) Many solutions to the frame problem have been proposed, but none is fully satisfactory. Other approaches divide the world into objects with spatial and temporal boundaries (Hayes 1985), or use transformations on state descriptions to model actions more directly. Several areas of knowledge engineering have received detailed attention, notably intuitive physical knowledge, often called qualitative physics (Davis 1990; Weld and de Kleer 1990).
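The actions-as-functions ontology above, and the frame problem it raises, can be made concrete in a brief sketch. This is a minimal illustration, not any particular system: states are modeled as immutable sets of fluent strings, each action is a function from states to states, and the nested term drink(move(mouth, pickup(cup, start-state))) corresponds to composing those functions. All names here are hypothetical.

```python
# Minimal situation-calculus-style sketch (all names hypothetical).
# A state is a frozenset of fluents; an action maps a state to a state.

def pickup(obj, state):
    # Effect: the object is now held. The frame problem in miniature:
    # every unchanged fluent must be carried over explicitly.
    return frozenset(state - {f"on_table({obj})"} | {f"holding({obj})"})

def move(place, state):
    return frozenset(state | {f"at({place})"})

def drink(state):
    if "holding(cup)" in state and "at(mouth)" in state:
        return frozenset(state | {"drunk"})
    raise ValueError("preconditions of drink not satisfied")

start_state = frozenset({"on_table(cup)", "table_is_red"})

# The "plan" is the nested action term applied to the start state.
goal_state = drink(move("mouth", pickup("cup", start_state)))
assert "drunk" in goal_state
# The table's color survives only because each action copied it forward:
assert "table_is_red" in goal_state
```

Note that nothing rules out side effects on logical grounds: each action definition must itself say which fluents persist, which is exactly the bookkeeping the frame problem asks us to compress.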

KR formalisms need a precisely defined SYNTAX, a useful SEMANTICS, and a computationally tractable inference procedure. A wide variety have been studied. Typical features include a notation for describing concept hierarchies and mechanisms to maintain property inheritance; an ability to check for, and correct, propositional inconsistency in the light of new information ("truth-maintenance"; see Forbus and de Kleer 1993); and ways of expressing a "closed world" assumption, that is, that a representation contains all facts of a certain kind (so if one is omitted it can be assumed to be false).
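The closed-world assumption mentioned above can be illustrated with a toy sketch (hypothetical data, not a real system): the knowledge base is taken to contain all facts of a certain kind, so a fact that is absent is treated as false rather than unknown.

```python
# Closed-world assumption sketch: the table is assumed to list ALL
# flights that exist (hypothetical data), so absence means falsehood.

flights = {("boston", "chicago"), ("chicago", "denver")}

def flight_exists(origin, dest):
    # Under the closed world, an omitted fact is assumed false,
    # not merely unknown.
    return (origin, dest) in flights

assert flight_exists("boston", "chicago")
assert not flight_exists("boston", "denver")  # omitted, hence false
```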

Many notations are inspired by sentential logics, some by semantic networks; others, often called "frame-based," resemble object-oriented programming languages. Most of these can be regarded as syntactic variations on subsets of first-order relational logic, sometimes extended to allow Bayesian probabilistic inference, fuzzy reasoning, and other ways to express partial or uncertain information or degrees of confidence. Many also use some form of default or nonmonotonic reasoning, allowing temporary assumptions to be cancelled by later or more detailed information. For example, if told that something is an elephant, one can infer that it is a mammal, but this inference would be withdrawn given the further information that it is a toy elephant. More recently there has been considerable interest in diagrammatic representations, which are supposed to represent by being directly similar to the subject being represented (Glasgow, Narayan, and Chandrasekharan 1995). These are sometimes claimed to be plausible models of mental IMAGERY, but Levesque and Brachman (1985) point out that a set of ground propositions together with a closed-world assumption has many of the functional properties of a mental image.
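The elephant example of default reasoning can be sketched in a few lines. This is only a toy illustration of nonmonotonicity, not a serious default logic: the default conclusion is drawn when nothing blocks it, and adding more specific information cancels it.

```python
# Toy nonmonotonic-reasoning sketch (hypothetical, not a real formalism):
# "elephants are mammals" holds by default, unless we also learn that
# the thing in question is a toy.

def is_mammal(facts):
    if "elephant" in facts and "toy" not in facts:
        return True       # default inference
    if "toy" in facts:
        return False      # more specific information cancels the default
    return None           # unknown

assert is_mammal({"elephant"}) is True
# Adding a fact withdraws an earlier conclusion -- the hallmark of
# nonmonotonic reasoning:
assert is_mammal({"elephant", "toy"}) is False
```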

A central issue for KR formalisms is the tradeoff between expressive power and deductive complexity. At one extreme, propositional logic restricted to Horn clauses (disjunctions of literals containing at most one unnegated atom) admits a very efficient decision procedure but cannot express any generalizations; at the other, full second-order logic is capable of expressing most of mathematics but has no complete inference procedure. Most KR formalisms adopt various compromises. Network and frame-based formalisms often gain deductive efficiency by sacrificing the ability to express arbitrary disjunctions. Description logics (Borgida et al. 1989) guarantee polynomial-time decision algorithms by using operators on concept descriptions instead of quantifiers.
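The efficiency of the Horn-clause fragment can be seen in a short forward-chaining sketch. This is a generic textbook procedure, not tied to any particular KR system: each rule is written as (body, head), a fact is a rule with an empty body, and derived atoms are accumulated until no rule fires.

```python
# Forward chaining for propositional Horn clauses (a standard sketch).
# A rule (body, head) means: if every atom in body is known, conclude head.

def entails(rules, query):
    known = set()
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return query in known

rules = [
    ((), "rain"),                  # fact: rain
    (("rain",), "wet_ground"),     # rain -> wet_ground
    (("wet_ground",), "slippery"), # wet_ground -> slippery
]
assert entails(rules, "slippery")
assert not entails(rules, "snow")
```

Because each pass either adds an atom or terminates, the procedure runs in polynomial time in the size of the rule set; contrast this with satisfiability for unrestricted propositional clauses, which is NP-complete.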

Commercial applications use knowledge representation as an extension of database technology, where the "knowledge" is seen as a reservoir of useful information rather than as supporting a model of cognitive activity. Here action planning is often unnecessary, and the ontology is fixed by the particular application -- for example, medical diagnosis or case law -- but issues of scale become important. The contents of such systems can often be thought of either as sentential knowledge or as program code; one school of thought in KR regards this distinction as essentially meaningless in any case. More recently, increased available memory has made it feasible to use "compute-intensive" representations that simply list all the particular facts rather than stating general rules. These allow the use of statistical techniques such as Markov simulation, but seem to abandon any claim to psychological plausibility.

Commercial use of knowledge bases and a proliferation of KR systems with various ad hoc syntactic restrictions have created a need for "standard" or "interchange" formalisms to allow intertranslation. These include the Knowledge Interchange Format, a blend of first-order set theory and LISP (Genesereth and Fikes 1992), and conceptual graphs, a graphical notation inspired by C. S. Peirce (Sowa 1997). There is also considerable interest in compiling standard ontologies for commonly used concepts such as temporal relations or industrial process control (see the "ontology page" for the current state of research in this area).

Although there has been a great deal of work on KR, much intuitive human knowledge still resists useful formalization. Even such apparently straightforward areas as temporal and spatial knowledge remain subjects of active research, and the knowledge involved in comprehending simple stories or understanding simple physical situations has yet to be adequately formalized. Many ideas have been developed in natural language research, such as "scripts" or idealized story frameworks and the use of a limited number of conceptual primitive categories, but none has achieved unqualified success.

Early work in AI and cognitive science assumed that suitably represented information must be central to a proper account of cognition, but more recently this assumption has been questioned, notably by "connectionist" and "situated" theories. Connectionism seeks to connect cognitive behavior directly to neurally inspired mechanisms, whereas situated theories focus on how behavior emerges from interaction with the environment (Brooks 1991). Both were initially seen as in direct opposition to knowledge-representation ideas, but reconciliations are emerging. In particular, phase-encoding seems to enable connectionist networks to perform quite sophisticated logical reasoning. The single most important lesson to emerge from these controversies is probably that the representation of knowledge cannot be completely isolated from its hypothesized functions in cognition.


-- Patrick Hayes


Borgida, A., R. J. Brachman, D. L. McGuinness, and A. L. Resnick. (1989). CLASSIC: A structural data model for objects. SIGMOD Record 18(2):58-67.

Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence 47(1-3): 139-160.

Davis, E. (1990). Representations of Common-Sense Knowledge. Stanford: Morgan Kaufmann.

Forbus, K., and J. de Kleer. (1993). Building Problem Solvers. Cambridge, MA: MIT Press.

Genesereth, M., and R. E. Fikes. (1992). Knowledge Interchange Format Reference Manual. Stanford University Logic Group, report 92-1, Stanford University.

Glasgow, J., N. H. Narayan, and B. Chandrasekharan, Eds. (1995). Diagrammatic Reasoning: Cognitive and Computational Perspectives. Cambridge, MA: AAAI/MIT Press.

Hayes, P. (1985). Naive Physics I: Ontology for liquids. In Hobbs and Moore (1985), pp. 71-107.

Hobbs, J. R. (1985). Ontological promiscuity. Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, pp. 61-69.

Levesque, H., and R. Brachman. (1985). A fundamental tradeoff in knowledge representation and reasoning. In Brachman and Levesque (1985).

Sowa, J. F. (1997). Knowledge Representation: Logical, Philosophical and Computational Foundations. Boston: PWS.

Weld, D., and J. de Kleer, Eds. (1990). Readings in Qualitative Reasoning about Physical Systems. San Francisco: Morgan Kaufmann.

Further Readings

Brachman, R., and H. Levesque. (1985). Readings in Knowledge Representation. Stanford: Morgan Kaufmann.

Hobbs, J., and R. Moore, Eds. (1985). Formal Theories of the Common-Sense World. Norwood, NJ: Ablex.

McCarthy, J., and V. Lifschitz, Eds. (1990). Formalizing Common Sense. Norwood, NJ: Ablex.

Russell, S., and P. Norvig. (1995). Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice-Hall.