A Bayesian Network for Temporal Segmentation

Abstract

A recurrent network that segments an unlabeled, externally timed sequence of data is presented. The proposed method uses a previously investigated Bayesian learning scheme, where the relaxation scheme is modified with a few extra parameters: a pairwise correlation threshold and a pairwise conditional probability threshold. These can make units with low correlation or low conditional probability inhibit, rather than excite, each other. The method is able to find the start and end positions of words in an unlabeled continuous stream of characters. Robustness against noise during both learning and recall is studied.
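The exact learning and relaxation equations are given in the paper and are not reproduced here. As a rough illustration only, the sketch below assumes a Bayesian-style Hebbian weight rule of the form w_ij = log(p_ij / (p_i p_j)) and shows how a pairwise correlation threshold and a pairwise conditional probability threshold could switch low-correlation or low-conditional-probability pairs from excitatory to inhibitory. The rule, the threshold values, and the function name bayesian_weights are illustrative assumptions, not the paper's formulation.

import numpy as np

# Illustrative sketch only (not the paper's formulation): a Bayesian-style
# Hebbian weight rule w_ij = log(p_ij / (p_i * p_j)), where unit pairs whose
# correlation or conditional probability falls below a threshold are given
# an inhibitory weight instead of an excitatory one.
def bayesian_weights(patterns, corr_thresh=0.05, cond_thresh=0.2, w_inhib=-1.0):
    """patterns: (n_samples, n_units) binary activity matrix."""
    n = patterns.shape[0]
    p_i = (patterns.sum(axis=0) + 1.0) / (n + 2.0)    # smoothed unit probabilities
    p_ij = (patterns.T @ patterns + 1.0) / (n + 2.0)  # smoothed pairwise probabilities

    w = np.log(p_ij / np.outer(p_i, p_i))             # positive when units co-occur more than chance

    corr = p_ij - np.outer(p_i, p_i)                  # pairwise correlation
    cond = p_ij / p_i[None, :]                        # conditional probability P(i | j)

    # Pairs below either threshold inhibit instead of excite each other.
    low = (corr < corr_thresh) | (cond < cond_thresh)
    w[low] = w_inhib
    np.fill_diagonal(w, 0.0)
    return w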

The segmentation problem is fundamental in pattern recognition. For data with sequential/temporal behaviour it appears as the temporal chunking problem, which may be illustrated with the following sequence:

thisisacontinuousstreamofdatathatispossibletoreadwithoutseparators

Here, we want unfamiliar lists of familiar items (characters), presented sequentially, to be recognized as new items (words). Initially only the characters are familiar; after different lists have been seen several times, the words are also recognized as familiar items. The method presented here detects the segmentation points between words, as illustrated below. Conceptually, this means that a sequence of elementary items has been grouped into a new, composite item.
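As a purely illustrative example of what detecting segmentation points amounts to, the snippet below splits the continuous stream at a set of hypothetical boundary positions; in the paper these positions are produced by the network itself, not given in advance.

# Illustration only: turning detected segmentation points into words.
# The boundary indices are hypothetical; in the paper they come from
# the network's response to the unlabeled stream.
stream = "thisisacontinuousstreamofdata"
boundaries = [0, 4, 6, 7, 17, 23, 25, 29]

words = [stream[a:b] for a, b in zip(boundaries, boundaries[1:])]
print(words)  # ['this', 'is', 'a', 'continuous', 'stream', 'of', 'data']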



Keywords: recurrent neural network, segmentation
A Bayesian Network for Temporal Segmentation. In Artificial Neural Networks (Proc. ICANN'92, Brighton, United Kingdom, September 4-7, 1992), pages 1081-1084. Elsevier, Amsterdam.
Roland Orre