Context Vectors: A Step Toward a Grand Unified Representation
Stephen I. Gallant


Abstract

Context vectors are fixed-length vector representations useful for document retrieval and word sense disambiguation. They were motivated by four goals:

  1. Capture "similarity of use" among words ("car" is similar to "auto", but not similar to "hippopotamus").
  2. Quickly find constituent objects (e.g., documents that contain specified words).
  3. Generate context vectors automatically from an unlabeled corpus.
  4. Use context vectors as input to standard learning algorithms.
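To make the first two goals concrete, here is a minimal sketch in Python. The vectors, words, and helper functions below are invented for illustration and are not the paper's construction or training procedure; they only show the two operations the goals describe: measuring "similarity of use" as the cosine between fixed-length word vectors, and representing a document in the same space as its words by summing and normalizing their vectors.

```python
import math
import random

# Illustrative only: real context vectors would be learned from a corpus
# (goal 3); here we fabricate vectors so that "car" and "auto" share a
# common direction while "hippopotamus" does not.
random.seed(0)
DIM = 8

def rand_vec():
    return [random.gauss(0.0, 1.0) for _ in range(DIM)]

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def scale(v, s):
    return [a * s for a in v]

def unit(v):
    n = math.sqrt(sum(a * a for a in v))
    return [a / n for a in v]

def cosine(u, v):
    # Vectors are unit-length, so the dot product is the cosine.
    return sum(a * b for a, b in zip(u, v))

base = rand_vec()
vectors = {
    "car": unit(add(base, scale(rand_vec(), 0.1))),
    "auto": unit(add(base, scale(rand_vec(), 0.1))),
    "hippopotamus": unit(rand_vec()),
}

# A document vector as the normalized sum of its words' vectors, so that
# documents and query words can be compared in one space (goal 2).
def doc_vector(words):
    total = [0.0] * DIM
    for w in words:
        total = add(total, vectors[w])
    return unit(total)

print(cosine(vectors["car"], vectors["auto"]))          # high
print(cosine(vectors["car"], vectors["hippopotamus"]))  # lower
```

Because every word and document reduces to one fixed-length vector, the result can feed directly into standard learning algorithms (goal 4).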

Context vectors lack, however, a natural way to represent syntax, discourse, or logic. Accommodating all of these capabilities in a "Grand Unified Representation" is, we maintain, a prerequisite for solving the most difficult problems in Artificial Intelligence, including natural language understanding.