Ergonomic man-machine interface incorporating adaptive pattern recognition based control system

An adaptive interface for a programmable system, for predicting a desired user function, based on user history, as well as machine internal status and context. The apparatus receives an input from the user and other data. A predicted input is presented for confirmation by the user, and the predictive mechanism is updated based on this feedback. Also provided is a pattern recognition system for a multimedia device, wherein a user input is matched to a video stream on a conceptual basis, allowing inexact programming of a multimedia device. The system analyzes a data stream for correspondence with a data pattern for processing and storage. The data stream is subjected to adaptive pattern recognition to extract features of interest to provide a highly compressed representation which may be efficiently processed to determine correspondence. Applications of the interface and system include a VCR, medical device, vehicle control system, audio device, environmental control system, securities trading terminal, and smart house. The system optionally includes an actuator for affecting the environment of operation, allowing closed-loop feedback operation and automated learning.


Claims

1. A method for classifying image data comprising the steps of:

providing a plurality of object-related models;
creating, from the image data, a plurality of accessible mapped ranges corresponding to different subsets of the image data;
assigning at least one identifier to corresponding ones of the mapped ranges, each of the identifiers specifying for the corresponding mapped range a procedure and a corresponding subset of the image data;
executing, for a plurality of the mapped ranges, a corresponding procedure upon a subset of the image data which corresponds to the mapped ranges;
selecting at least one of the mapped ranges corresponding to a portion of the image data;
representing the image data as a set of the identifiers of the selected mapped ranges; and
determining a class relationship of the representation of the image data as a set of identifiers of the selected mapped ranges with at least one of said plurality of models based on an image-to-model correspondence.
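Claim 1 describes a fractal-style coding scheme: the image is represented not by its pixels but by identifiers of mapped ranges that best match each block. A minimal, hypothetical Python sketch follows; the block size, the averaging procedure used to build the range pool, and all function names are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def best_range_id(domain, ranges):
    # Identifier (index) of the mapped range closest to the domain
    # block under mean-squared error, one of the criteria named in
    # claim 3.
    errs = [np.mean((domain - r) ** 2) for r in ranges]
    return int(np.argmin(errs))

def represent(image, block=4):
    # Represent an image as a list of range identifiers (claim 1).
    # The range pool here is built by 2x2-averaging each
    # (2*block)-sized region ("executing a procedure upon a subset
    # of the image data"); a real coder searches a far richer pool.
    h, w = image.shape
    ranges = []
    for i in range(0, h, 2 * block):
        for j in range(0, w, 2 * block):
            big = image[i:i + 2 * block, j:j + 2 * block]
            ranges.append(big.reshape(block, 2, block, 2).mean(axis=(1, 3)))
    ids = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            ids.append(best_range_id(image[i:i + block, j:j + block], ranges))
    return ids
```

The resulting identifier list is the "highly compressed representation" that later steps compare against the object-related models.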

2. The method according to claim 1, further comprising the steps of:

generating a plurality of addressable domains from the image data, each of the domains representing a portion of the image information;
subjecting a domain to one or more transforms selected from the group consisting of a null transformation, a predetermined rotation, an inversion, a predetermined scaling, and a predetermined frequency domain preprocessing;
said selecting step comprising selecting, for each of the transformed domains, at least one of the mapped ranges which most closely corresponds according to predetermined criteria; and
selecting, based on the determined class relationship, a model which most closely corresponds to the set of identifiers representing the image information.

3. The method according to claim 2 wherein the step of selecting the most closely corresponding one of the mapped ranges comprises the step of selecting, for each transformed domain, the mapped range which is the most similar, by a method selected from at least one of the group consisting of selecting a minimum Hausdorff distance from the domain, selecting the highest cross-correlation with the domain and selecting the lowest mean square error of the difference between the mapped range and the domain.

4. The method according to claim 3 wherein the step of selecting the most closely corresponding one of mapped ranges includes the step of selecting, for each transformed domain, the mapped range with the minimum modified Hausdorff distance calculated as D{db,mrb}+D{1-db,1-mrb}, where D is a distance calculated between a pair of sets of data each representative of an image, db is a domain, mrb is a mapped range, 1-db is the inverse of a domain, and 1-mrb is an inverse of a mapped range.
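The claim-4 metric D{db,mrb}+D{1-db,1-mrb} sums the Hausdorff distance between the blocks and the distance between their complements, so mismatched background is penalized as well as mismatched foreground. A hedged sketch for binary (0/1) blocks; the point-set representation and function names are assumptions for illustration.

```python
import numpy as np

def hausdorff(a, b):
    # Symmetric Hausdorff distance between two point sets, each an
    # (n, 2) array of pixel coordinates.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def modified_hausdorff(db, mrb):
    # Claim-4 modified distance: D{db,mrb} + D{1-db,1-mrb}, where
    # 1-x is the inverse (complement) of a binary block.
    def points(img):
        return np.argwhere(img > 0).astype(float)
    return (hausdorff(points(db), points(mrb))
            + hausdorff(points(1 - db), points(1 - mrb)))
```

The domain is then assigned to whichever candidate mapped range minimizes this value.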

5. The method according to claim 2, wherein the image data comprises a plurality of pixels each having one of a plurality of associated color map values, further comprising the steps of:

optionally transforming, for each axis of the color map, values of the pixels of each domain by a function including at least one scaling function, each of which may be the same or different, and selected based on a correspondence between the domains and ranges to which they are to be matched; and
selecting, for each of the domains, at least one of the mapped ranges having color map pixel values corresponding to the color map pixel values of the domain according to a predetermined criteria, wherein the step of representing the image color map information includes the substep of representing the image color map information as a set of values each including an identifier of the selected mapped range and the color map transform.

6. The method according to claim 2, wherein the image data includes a sequence of relatively delayed images representing at least one moving object, further comprising the steps of:

storing delayed image data;
generating a plurality of addressable further domains from the stored delayed image data, each of the further domains representing a portion of the delayed image information, and corresponding to a domain;
creating, from the stored delayed image data, a plurality of addressable mapped ranges corresponding to different subsets of the stored delayed image data;
matching the further domain and the domain by subjecting a further domain to one or more corresponding transforms selected from the group consisting of a null transform, a predetermined rotation, an inversion, a predetermined scaling, and a predetermined frequency domain preprocessing, which corresponds to the transforms applied to a corresponding domain, and one or more noncorresponding transforms selected from the group consisting of a predetermined rotation, an inversion, a predetermined scaling, a translation and a predetermined frequency domain preprocessing, which does not correspond to the transforms applied to a corresponding domain;
computing a motion vector between one of the domain and the further domain, or the set of identifiers representing the image data and the set of identifiers representing the delayed image data, and storing the motion vector;
compensating the further domain with the motion vector and computing a difference between the compensated further domain and the domain;
selecting, for each of the delayed domains, at least one of the mapped ranges corresponding to a portion of the delayed image data;
representing a difference between the compensated further domain and the domain as a set of difference identifiers of a set of selected mapping ranges and an associated motion vector, and representing the further domain as a set of identifiers of the selected mapping ranges;
determining a complexity of the difference based on a density of representation; and
when the difference has a complexity below a predetermined threshold, selecting, from the plurality of models, a model which most closely corresponds to the set of identifiers of the image data and the set of identifiers of the delayed image data.
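Claim 6 extends the scheme to image sequences: a motion vector aligns the delayed (further) domain with the current domain, and only the compensated residual is coded. A minimal exhaustive-search sketch under stated assumptions: the domain's nominal position in the delayed frame is at offset (search, search), so a vector of (0, 0) means no motion; block size, search window, and function names are hypothetical.

```python
import numpy as np

def find_motion_vector(domain, delayed, search=2):
    # Exhaustively search a small window of the delayed frame for
    # the offset whose block best matches `domain` (minimum MSE),
    # as in the claim-6 motion-vector computation.
    b = domain.shape[0]
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = dy + search, dx + search  # nominal position is (search, search)
            err = np.mean((domain - delayed[y:y + b, x:x + b]) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def compensated_difference(domain, delayed, mv, search=2):
    # Residual between the domain and the motion-compensated further
    # domain; its "complexity" (e.g. energy or density of nonzero
    # identifiers) is what claim 6 tests against a threshold.
    b = domain.shape[0]
    y, x = mv[0] + search, mv[1] + search
    return domain - delayed[y:y + b, x:x + b]
```

When the residual's complexity falls below the threshold, the pair of identifier sets is matched against the stored models as in claim 1.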

7. The method according to claim 1, wherein said representing step further comprises the steps of determining an object-related feature of interest of the image data, selecting at least one mapped range corresponding to the feature of interest, storing the identifiers of the selected mapped range, selecting a further mapped range corresponding to a portion of image data having a predetermined relationship to the object-related feature of interest and storing the identifiers of the further mapped range.

8. The method according to claim 1, wherein said image data comprises data representing three associated physical dimensions obtained by a method selected from the group consisting of synthesizing a three dimensional representation based on a machine based prediction derived from two dimensional image data, synthesizing a three dimensional representation derived from a time series of pixel images, and synthesizing a three dimensional representation based on a image data representing a plurality of parallax views having at least two dimensions.

9. An apparatus for automatically recognizing digital image data consisting of image information, comprising:

means for storing object-related template data;
means for storing the image data;
means for generating a plurality of addressable domains from the stored image data, the domains representing different portions of the image information;
means for creating, from the stored image data, a plurality of addressable mapped ranges corresponding to different subsets of the stored image data, the creating means including means for executing, for each of the mapped ranges, a procedure upon a subset of the stored image data which corresponds to the mapped range;
means for assigning identifiers to corresponding ones of the mapped ranges, each of the identifiers specifying for the corresponding mapped range an address of the corresponding subset of stored image data;
means for selecting, for each of the domains, at least one of the mapped ranges which most closely corresponds according to predetermined criteria;
means for representing at least a portion of the image information as a set of the identifiers of the selected mapped ranges; and
means for selecting, from the stored templates, a template which most closely corresponds to the set of identifiers representing the portion of the image information.

10. A method of selectively processing an image having a dimensionality, comprising the steps of:

providing information relating to a plurality of exemplars, said information relating to each exemplar having a dimensionality differing from the dimensionality of the image and including information representing at least one additional dimension;
inputting an electronic representation of an image containing a representation of at least one physical object;
preprocessing the electronic representation of the image to distinguish a representation of at least one object in the image;
processing the distinguished object represented in the image to reduce a storage requirement of the distinguished representation of the object while retaining morphological information describing the distinguished representation of the object;
further processing the processed distinguished representation of the object in conjunction with a plurality of exemplars to produce a comparison; and
selectively producing an output signal based on said comparison.

11. The method according to claim 10, wherein said processing step comprises modeling the distinguished representation of the object in at least three independent dimensions.

12. The method according to claim 10, wherein said plurality of exemplars comprise models each having at least three dimensions, said comparing step comprising projecting said exemplar into two dimensions.

13. The method according to claim 10, further comprising the steps of:

inputting a search criteria relating to image morphology;
comparing the processed distinguished representation of the object to the search criteria; and
selectively processing the image based on said output signal and said comparison of said search criteria.

14. The method according to claim 13, further comprising the steps of:

receiving a user input relating to said search criteria;
predicting a most probable intended search criteria based on said input;
presenting to the user a predicted search criteria based on said predicted intended search criteria; and
receiving feedback from the user to determine an agreement with said predicted search criteria.
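Claims 14-15 describe predicting the user's intended search criteria from partial input and adapting the predictor from confirmation feedback. One simple way to realize an adaptive predictor is a frequency table reinforced by accepted predictions; this sketch is purely illustrative, and the class and method names are assumptions, not from the patent.

```python
from collections import Counter

class AdaptivePredictor:
    # Predicts the most probable intended search criterion from a
    # partial user input and adapts on feedback (claims 14-15).
    def __init__(self):
        self.counts = Counter()

    def predict(self, prefix):
        # Most frequently confirmed criterion starting with `prefix`,
        # or None if nothing matches.
        candidates = [c for c in self.counts if c.startswith(prefix)]
        if not candidates:
            return None
        return max(candidates, key=lambda c: self.counts[c])

    def feedback(self, criterion, accepted):
        # Reinforce confirmed predictions, making the predictor
        # adaptive per claim 15.
        if accepted:
            self.counts[criterion] += 1
```

Richer predictors (context, machine status, recency weighting, as in the abstract) fit the same predict/feedback loop.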

15. The method according to claim 14, wherein said predicting step is adaptive.

16. The method according to claim 13, wherein said search criteria comprises an input image pattern.

17. The method according to claim 13, wherein said search criteria comprises an identifier of an object or an image.

18. An apparatus for selectively processing a received electronic representation of an image, having dimensionality and containing representation of at least one physical object, comprising:

means for storing information relating to a plurality of exemplars, said information relating to each exemplar having a dimensionality differing from the dimensionality of the image and including information representing at least one additional dimension;
means for distinguishing a representation of at least one object in the electronic representation of the image;
means for reducing a storage requirement of the distinguished representation of the object in the image while retaining morphological information describing the distinguished object; and
means for producing a comparison based on the processed distinguished representation of the object to information relating to a plurality of said exemplars, by processing said information relating to said exemplars in conjunction with said distinguished representation of the object, producing an output signal relating to said comparison.

19. The apparatus according to claim 18, wherein said means for distinguishing comprises means for determining motion planes from a sequence of representations of the image.

20. The apparatus according to claim 18, wherein said means for reducing generates a set of identifiers of an iterated function system describing the object morphology.

21. The apparatus according to claim 18, wherein said received electronic representation of an image comprises a video signal, further comprising means for selectively recording the video signal based on said comparison.

References Cited
U.S. Patent Documents
5046113 September 3, 1991 Hoki
5065447 November 12, 1991 Barnsley et al.
5067161 November 19, 1991 Mikami et al.
5067166 November 19, 1991 Ito
5128525 July 7, 1992 Stearns et al.
5151789 September 29, 1992 Young
5214504 May 25, 1993 Toriu
5280530 January 18, 1994 Trew et al.
5303313 April 12, 1994 Mark et al.
5347600 September 13, 1994 Barnsley et al.
5384867 January 24, 1995 Barnsley et al.
5430812 July 4, 1995 Barnsley et al.
5495537 February 27, 1996 Bedrosian et al.
5526479 June 11, 1996 Barstow et al.
5546518 August 13, 1996 Blossom et al.
Patent History
Patent number: 5901246
Type: Grant
Filed: Jun 6, 1995
Date of Patent: May 4, 1999
Inventors: Steven M. Hoffberg (Yonkers, NY), Linda I. Hoffberg-Borghesani (Acton, MA)
Primary Examiner: Joseph Mancuso
Assistant Examiner: Jayanti K. Patel
Attorney: Steven M. Hoffberg
Application Number: 8/469,104
Classifications
Current U.S. Class: Template Matching (e.g., Specific Devices That Determine The Best Match) (382/209)
International Classification: G06K 9/62