UNSUPERVISED LEARNING
The best results for unsupervised learning currently come from my MK10 and SNPR models of first-language learning. An account of this work, with copies of papers, may be found in Language Learning as Compression.
SP70 (described below) is a relatively new model of unsupervised learning that attempts to integrate learning with parsing and production of language, fuzzy pattern recognition, best-match information retrieval, probabilistic and exact forms of reasoning, and other capabilities. As a model of learning, it is not yet as successful as the earlier models.
Developing multi-level grammars in a framework of information compression by multiple alignment, unification and search. Cognition Research Technical Report, March 2005. PDF.
This paper describes the SP71 model of grammatical inference, designed to overcome a weakness of SP70: that it can discover structure at only two levels of abstraction, words and sentences. By contrast, SP71 can discover intermediate levels of structure such as phrases and clauses. More work is needed to iron out some anomalies in the model.
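The SP models themselves are not reproduced here, but the general idea of inferring a hierarchy of grammatical units by compression can be illustrated with a simple Sequitur-style digram-substitution sketch. This is an illustrative analogy only, not the SP70/SP71 algorithm: it repeatedly replaces the most frequent adjacent pair of symbols with a new nonterminal, so that successive rules form nested levels of structure.

```python
# Illustrative sketch only: grammar induction by compression via repeated
# digram substitution (in the spirit of Sequitur). NOT the SP70/SP71
# algorithm -- just an analogy for discovering multi-level structure.
from collections import Counter

def induce_grammar(tokens, min_count=2):
    """Repeatedly replace the most frequent adjacent pair with a new
    nonterminal, yielding a hierarchy of rules over the input sequence."""
    rules = {}          # nonterminal -> (left_symbol, right_symbol)
    seq = list(tokens)
    next_id = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < min_count:
            break       # no pair repeats often enough to compress
        nt = f"N{next_id}"
        next_id += 1
        rules[nt] = pair
        # Rewrite the sequence, replacing each occurrence of the pair.
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return rules, seq

rules, top = induce_grammar(
    "the cat sat on the mat the cat sat on the hat".split())
```

Each induced rule builds on earlier ones (N0 covers two words, N1 covers N0 plus a word, and so on), giving the kind of intermediate levels, analogous to phrases and clauses, that the text describes; the residual sequence `top` is the compressed top-level description.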
Unsupervised learning in a framework of information compression by multiple alignment, unification and search. School of Informatics Report, November 2001, University of Wales Bangor. PDF, Postscript, uk.arxiv.org/abs/cs.AI/0302015.
Describes SP70 (version 9.2), a development of the SP framework that incorporates unsupervised learning of grammar-like structures from language-like input. A short version of this paper was presented at the Workshop and Tutorial on Learning Context-Free Grammars at ECML/PKDD 2003.