Method and system for implementing N-dimensional object recognition using dynamic adaptive recognition layers
In a method and a system for the implementation of multi-layered network object recognition in N-dimensional space, the structure of a neural recognition network is dynamically generated and adapted to recognize an object. The layers of the network are capable of recognizing key features of the input data by using evaluation rules to establish a hierarchical structure that can adapt to data position and orientation, varying data densities, geometrical scaling, and faulty or missing data.
The invention relates to a method and system for implementing successive multi-layered feature recognition in N-dimensional space in which recognition cells are dynamically generated to accommodate the input data and are adapted during the recognition process, wherein the recognition cells are structured into groups which have specific recognition features assigned to them.
Pattern and object recognition by means of successive computation steps generally begins with the loading of an input dataset into a pre-defined set of input variables or cells which constitute the lowest recognition layer. During the recognition process, each cell in higher recognition layers generates a response based on the values or responses of a selected subset of cells in lower layers (receptive field). The number of layers used, the sizes of the receptive fields, and the rule used by each cell to compute its response vary depending on the type of information to be recognized, that is, the complexity and number of the patterns, and the intermediate features that must be recognized to successfully identify the pattern. Sufficiently fine-grained intermediate features, overlapping receptive fields, and strongly converging data paths enable distortion-invariant and position-tolerant recognition.
The structure and dimension of the recognition layers are generally fixed during the recognition process, requiring that each layer contain enough recognition cells to fill the N dimensions of the recognition space at the required resolution. For cases where N&gt;2, the resulting large number of cells makes computation of the cell responses infeasible. The process becomes particularly inefficient where the input data is sparsely distributed throughout a large input space.
It is the object of the present invention to provide a method by which the responses of the relevant cells in the higher recognition layers can be efficiently calculated without performing trivial calculations that contribute nothing to the solution.
SUMMARY OF THE INVENTION
In a method and a system for the implementation of multi-layered network object recognition in N-dimensional space, the structure of a neural recognition network is dynamically generated and adapted to recognize an object. The layers of the network are capable of recognizing key features of the input data by using evaluation rules to establish a hierarchical structure that can adapt to data position and orientation, varying data densities, geometrical scaling, and faulty or missing data. Based on successive hierarchical feature recognition and synthesis, sufficient relevant recognition cells are generated to enable data processing and information propagation through the recognition network without generating or computing unnecessary irrelevant cells.
Before any data is processed, a network hierarchy is defined and constructed comprising a succession of recognition layers which each contain a collection of nodes or cells. The number of cells in a layer varies during processing, and each layer is initially equipped with zero cells. Each layer recognizes a specific group of key features, and the cells in the layer are used to represent the presence and characteristics of the features. The layers are interlinked by ownership-type relationships in which a cell in a given layer is said to own a group of cells in a subordinate layer, and this subordinate group of cells is said to constitute the receptive field of the superordinate cell. This owner-subordinate relationship is repeated between each pair of successive layers from the lowest input layer to the final uppermost recognition layer. This final layer represents the answer to the overall recognition problem in that its cells recognize the key features that uniquely classify the pattern imposed on the input layer.
All layers have the following properties:
- 1. Each layer represents an abstraction level of the object to be recognized, with the complexity of the abstraction being greater in the higher layers.
- 2. Each layer is equipped with a set of key features that characterize an aspect or property of the object to be recognized, with said features being computable by examining the properties and responses of a subset of cells in the subordinate layer.
- 3. Each layer is equipped with a rule or algorithm for determining whether a subordinate layer cell contributes positively or negatively to the recognition process carried out in said layer, specifically, a means of determining whether the inclusion of a given subordinate cell in the receptive field of a given superordinate cell is advantageous to the recognition function of the superordinate cell.
All cells have the further following properties:
- 1. A 1-dimensional response vector (fuzzy polarization vector) which indicates the cell's recognition of the features which are key to the layer containing the cell.
- 2. A collection of pointers representing references to a subset of cells in the layer subordinate to the layer containing said cell, collectively called receptive field, and a corresponding collection of weights.
- 3. A single pointer representing a reference to a cell in the layer superordinate to the layer containing said cell, called owning cell.
- 4. Variables containing computed geometric information such as unit normal vector, centroid, and orientation direction.
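The cell and layer properties enumerated above can be sketched as plain data structures. A minimal sketch in Python; all names (`Cell`, `Layer`, the field names) are illustrative choices, not terms from the specification:

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    """One recognition cell, carrying the four properties listed above."""
    polarization: list                                   # fuzzy polarization vector (property 1)
    receptive_field: list = field(default_factory=list)  # refs to subordinate cells (property 2)
    weights: list = field(default_factory=list)          # one weight per receptive-field member
    owner: "Cell | None" = None                          # single superordinate cell (property 3)
    centroid: tuple = (0.0, 0.0, 0.0)                    # computed geometric info (property 4)
    normal: tuple = (0.0, 0.0, 1.0)

@dataclass
class Layer:
    """One recognition layer; initially holds zero cells and grows during processing."""
    key_features: list                                   # labels of the features this layer recognizes
    cells: list = field(default_factory=list)
```

A layer starts empty and cells are appended as input data arrives or as orphaned subordinate cells spawn new owners.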
During the computation process, cells in the various hierarchical layers may be created and destroyed, linked and unlinked with owners, and assigned to, and removed from, receptive fields in an iterative, convergent process that can implement neural network recognition techniques, resulting in a final collection of cells that adequately represents the object to be recognized at the various hierarchical layers. This final structure is both a hierarchical map of the input data and a network capable of recognizing the key features of the input data. Information flows between neighboring layers in both bottom-up and top-down directions during the convergence process: superordinate cells extract features by evaluating the properties of the subordinate cells in their receptive fields, and subordinate cells base their membership decisions, receptive field sizes, and evaluation parameters on information from the superordinate layer cells. Recognition occurs when the top-down and bottom-up signals are sufficiently mutually fortifying to establish a persistent, stable activation level of all involved cells; at this point the iterative process has converged to a solution.
Overall solution convergence is driven by cells being grouped into receptive fields if they have converging interests, that is, if they represent the same thing, as can be seen from their recognition vectors, which must converge with that of the owning cell.
The computation process is initiated by transferring the input data into the lowest hierarchical layer (the input or zeroth layer). Typical input data may consist of simply a list of coordinates and a physical property that has been measured at that coordinate. In this case, a single input cell may be generated for each input data point, fully representing the input data with zero information loss.
The next superordinate layer is constructed by means of an appropriate rule or algorithm for grouping input layer cells. In principle, the input layer cells are divided into a number of groups, a new layer 1 cell is generated for each group, the receptive field pointers of the layer 1 cell are set to point to the cells in the group, and the cells in the group become owned by the new layer 1 cell. This initial grouping into receptive fields is repeated as each new layer is constructed, and, although the iterative solution process converges despite ill-defined initial groupings, a well-planned initial grouping speeds convergence.
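One possible initial grouping rule is greedy spatial proximity: each unowned input cell seeds a new superordinate cell that absorbs all still-unowned cells within a fixed radius. A minimal sketch, with the grouping radius as an assumed parameter that the text leaves open:

```python
import math

def group_by_proximity(points, radius):
    """Greedy initial grouping: each unowned point seeds a new group and
    absorbs every still-unowned point within `radius` of the seed."""
    owner = [None] * len(points)   # index of the group owning each point
    groups = []                    # each group is a list of point indices
    for i, seed in enumerate(points):
        if owner[i] is not None:
            continue
        group = [i]
        owner[i] = len(groups)
        for j, p in enumerate(points):
            if owner[j] is None and math.dist(seed, p) <= radius:
                group.append(j)
                owner[j] = len(groups)
        groups.append(group)
    return groups

# Two well-separated clusters of scan points yield two groups.
pts = [(0, 0, 0), (0.1, 0, 0), (5, 5, 5), (5.2, 5, 5)]
groups = group_by_proximity(pts, radius=1.0)
```

Even a crude grouping like this suffices as a starting point, since, as noted above, the iterative solution process converges despite ill-defined initial groupings.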
When the first two layers have been constructed, the iterative solution process may begin, and may continue throughout and after the successive construction of the third through final layers. The iterative process may implement whatever neural network recognition techniques are appropriate to the key features of the relevant layer, generally simple analytical formulae in the lower layers and more complex pattern recognition algorithms in the higher layers, but always driving toward a stable state of mutual inter-layer reinforcement. In addition, during the iterative solution process, receptive field sizes and recognition parameters are adjusted, cells may reselect the receptive field to which they belong based on new information from higher layers, and cells may even find there is no existing receptive field they wish to join. Superordinate cells are created as required to take over ownership of orphaned cells, and are destroyed when their receptive fields have atrophied.
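One pass of this reselect/orphan/spawn/destroy cycle can be outlined as follows. The scoring rule, membership threshold, and spawning function stand in for the layer-specific rules and are illustrative assumptions, not details from the specification:

```python
def iterate_once(sub_cells, sup_cells, score, threshold, spawn):
    """One convergence pass: each subordinate cell joins the best-scoring
    superordinate cell, or spawns a new one if none scores above threshold;
    superordinate cells with empty (atrophied) receptive fields are destroyed."""
    for cell in sub_cells:
        best = max(sup_cells, key=lambda s: score(s, cell), default=None)
        if best is None or score(best, cell) < threshold:
            best = spawn(cell)               # no acceptable owner: create one
            sup_cells.append(best)
        old = cell["owner"]
        if old is not best:                  # reselect: move between receptive fields
            if old is not None:
                old["field"].remove(cell)
            best["field"].append(cell)
            cell["owner"] = best
    sup_cells[:] = [s for s in sup_cells if s["field"]]

# Toy 1-D example: two nearby cells share an owner, the distant one spawns its own.
subs = [{"owner": None, "x": 0.0}, {"owner": None, "x": 0.2}, {"owner": None, "x": 5.0}]
sups = []
iterate_once(subs, sups,
             score=lambda s, c: -abs(s["c"] - c["x"]),   # closeness as compatibility
             threshold=-1.0,
             spawn=lambda c: {"field": [], "c": c["x"]})
```

In the full network this pass would alternate with re-evaluation of the superordinate cells' features, repeating until the mutual reinforcement between layers stabilizes.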
The flexible dynamic structure has the following advantages:
- Cells are generated only where there is data to recognize
- Receptive field sizes are dynamic and membership criteria are based on rules, allowing receptive fields to adapt to varying data densities and geometric scaling of key features (the network is scale invariant)
- Since, through the process of mutual reinforcement, the response of superordinate cells is activated by the presence of merely sufficient supporting subordinate cells, not all conceivable supporting cells, the recognition process succeeds despite faulty, noisy, or missing data, or even despite errors in the recognition of a minority of cells in lower layers (the network is fault-tolerant)
- Since recognition cells are constructed where the data is found, a spatial translation of input data has no effect whatever on the ability of the network to recognize the assigned patterns (the network's recognition is position-invariant)
- Since the network is fault-tolerant at each recognition layer, the network is capable of compensating for each way in which the data may differ from the recognition ideal (the network's recognition is distortion-invariant).
A typical problem well suited to this network algorithm is 3-dimensional object recognition using as input data a set of 3-dimensional coordinate points representing random points on the surface of the object to be recognized. Such a dataset could be generated, for example, by a laser scanning device capable of detecting and outputting the 3-dimensional coordinates of its scan points. Scan data generated in an industrial plant setting for the purposes of documentation, computation, and analysis has a usefulness that is directly related to the degree or complexity of recognition, or intelligence. For example, recognizing the mere presence of a surface is accomplished by a laser scan, but is not very useful; to recognize that surface as part of a cylinder is better, but to recognize it as part of a pipe of a certain size with specific end connections allows the generation of useful CAE models of existing plants. In the current implementation, the input data consists of a 3-dimensional laser scan of a section of an industrial plant.
The current implementation uses 5 layers abstracted as follows (that is, what a cell from each layer represents):
Layer 0: 3-D Point
Layer 1: Nearly flat surface patch
Layer 2: Curved or flat surface fragment
Layer 3: Geometric primitive (cylinder, box, edge, sphere)
Layer 4: Final 3-D object
The input layer of the network is filled with input data such that one input layer cell is generated for each scanned laser point and assigned the spatial coordinates of the scan point. A cell in this layer can only represent a spatial point, therefore its polarization vector has only one component which always has a value of 1.
The next step constructs Layer 1 by finding an unowned layer 0 cell, generating a new layer 1 cell as its owner, and searching in the neighborhood of the previously unowned layer 0 cell for other unowned layer 0 cells that can be added to the receptive field of the new layer 1 cell. This process is repeated until there are no unowned layer 0 cells. A layer 1 cell in this implementation can only represent a small surface patch locally approximated as flat, and therefore also has a polarization vector that has only one component which always has a value of 1.
With layer 1 constructed, it is now possible to perform the first optimization. First, each layer 1 cell now has a group of layer 0 cells which it owns and which form its receptive field. From these cells, the layer 1 cell can evaluate itself and recognize the features it is assigned to recognize. Since the cells of layer 1 are intended to represent flat surface patches or panels, they can be appropriately represented by the coordinates of their centroid and by a 3-dimensional vector representing the surface normal at the centroid, whereby these values can be computed by simple analytical formulae applied to the layer 0 cells in the receptive field. Once the layer 1 cells have been evaluated, the rule can be applied which judges the contribution of a layer 0 cell to the recognition process of the layer 1 cell to which it belongs. Since a layer 1 cell represents a flat surface panel, a layer 0 cell contributes poorly or detrimentally to the recognition process of the layer 1 cell if it does not lie in or near the plane represented by the layer 1 cell. In this case, the layer 0 cell is removed from the receptive field of the layer 1 cell and allowed to search for another, more appropriate layer 1 cell to join. If that search is unsuccessful, the orphaned layer 0 cell initiates a new layer 1 cell to own it. During successive iterations, that newly generated receptive field, which at first contains a single layer 0 cell, may grow by the acquisition of other layer 0 cells that are rejected from nearby receptive fields. The final state of layers 0 and 1 is such that each layer 0 cell is owned by a layer 1 cell, and each layer 1 cell has a well-defined receptive field with members that support the recognition decision carried out by the layer 1 cell.
In addition, all input data has been processed without loss of information, in that each input data point has generated a layer 0 cell, and each layer 0 cell has been given the opportunity to represent the surface to which it belongs in the layer 1 surface panels.
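The "simple analytical formulae" for a layer 1 cell's centroid and surface normal are not given explicitly; one standard choice is a least-squares plane fit, taking the normal as the eigenvector of the point covariance matrix with the smallest eigenvalue. A sketch under that assumption:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point cloud: returns (centroid, unit normal).
    The normal is the covariance eigenvector with the smallest eigenvalue."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # direction of least variance
    return centroid, normal / np.linalg.norm(normal)

# Points lying roughly in the z = 0 plane: the fitted normal is near +/-(0, 0, 1).
c, n = fit_plane([(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0.01)])
```

A layer 0 cell whose distance from the plane, |(p − centroid)·normal|, exceeds a tolerance would then be rejected from the receptive field, per the membership rule described above.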
Next the layer 2 cells are constructed by a similar process: for each unowned layer 1 cell, a new layer 2 cell is created and allowed to search for further receptive field members. Since the layer 2 cells can represent various types of surface elements, including curved surfaces, they have additional properties or variables to represent the features unique to this recognition layer (strength of curvature, that is, radius of curvature, and curvature orientation) and a two-component polarization vector to indicate the type of surface represented (flat or curved). These properties can be computed from the cells of the receptive field, and the layer 2 cells can be optimized by an iterative process as was done for the layer 1 cells. The conditions for membership in the receptive field of a layer 2 cell differ, however, from the conditions for membership in the receptive field of a layer 1 cell. Since layer 2 cells may represent curved surfaces, they must be more lenient in their selection of members, also allowing layer 1 cells to join that have their centroids somewhat outside the average plane represented by the layer 2 cell. On the other hand, layer 2 cells may impose new membership conditions on layer 1 cells, for example requiring that the receptive field of a layer 1 cell lie adjacent to the receptive field of another layer 1 cell that is already a member of the layer 2 cell's receptive field before allowing membership.
Top-down information flow may contribute to the recognition process in the following way: once a radius of curvature has been computed for a layer 2 cell, it becomes clear that that cell represents a curved surface segment that is likely to be part of a cylinder or sphere. For the purpose of recognizing that cylinder or sphere, it is unnecessary to include information from points that are a distance of more than twice the computed radius of curvature away from the computed center. Thus it is possible for a layer 2 cell to restrict the size of the receptive fields of the layer 1 cells it owns to a certain value, and to allow the layer 1 cells to reevaluate themselves based on this new information.
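The restriction described above can be sketched as a simple pruning step; the function name and call signature are illustrative:

```python
import math

def restrict_field(members, center, radius_of_curvature):
    """Top-down pruning: keep only receptive-field members within twice the
    computed radius of curvature of the computed center of curvature."""
    limit = 2.0 * radius_of_curvature
    return [m for m in members if math.dist(m, center) <= limit]

# A point 5 units from the center is dropped when the radius of curvature is 1.
kept = restrict_field([(0, 0, 1), (0, 0, 5)], center=(0, 0, 0), radius_of_curvature=1.0)
```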
Layer 3 cells are then generated and optimized analogously to the layer 1 and layer 2 cells. Layer 3 cells are built up from the features of layer 2 cells, and represent cylindrical segments, edges, boxes, and whatever other geometric primitives are necessary to represent the objects found in the input data. Layer 3 cells again contain properties or variables not found in layer 2 cells to assess the key features detected, and a four-component polarization vector to specify the type of object the layer 3 cell represents.
The polarization vector of a cell indicates the degree to which the cell belongs to one of the recognition categories and serves to assess the compatibility between cells of differing layers. In the example of the layer 3 cells mentioned above, the polarization vector tends toward one of the following states:
[1 0 0 0]≡Plane
[0 1 0 0]≡Edge
[0 0 1 0]≡Cylinder
[0 0 0 1]≡Sphere
During the initial construction of the layer 3 cells, the polarization vector is set to [1 1 1 1] meaning that the cell is simultaneously a flat plane, an edge, a cylinder, and a sphere. As the cell acquires subordinate cells and reevaluates itself, the polarization vector is refined and asymptotically approaches one of the states listed above.
The polarization vector is used in conjunction with a compatibility matrix to assess the utility or feasibility of including a given layer 2 cell in the receptive field of a given layer 3 cell. This is carried out by multiplying the polarization vector of the layer 3 cell with the compatibility matrix and with the polarization vector of the layer 2 cell.
The resulting value indicates whether the layer 2 cell is a suitable receptive field cell for the layer 3 cell.
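This triple product, p₃ᵀ·C·p₂, can be sketched as follows. The entries of the compatibility matrix C are illustrative assumptions (the text does not give them); only its shape, four layer 3 states by two layer 2 states, follows from the description:

```python
import numpy as np

# Layer 3 polarization states from the text: [plane, edge, cylinder, sphere].
p3_cylinder = np.array([0.0, 0.0, 1.0, 0.0])   # layer 3 cell tending toward "cylinder"
# Layer 2 polarization states: [flat, curved].
p2_curved = np.array([0.0, 1.0])
p2_flat   = np.array([1.0, 0.0])

# Illustrative compatibility matrix C: rows = layer 3 states, columns = layer 2
# states; entries are assumptions chosen so curved patches support cylinders/spheres.
C = np.array([[1.0, 0.0],    # plane    <- flat
              [0.5, 0.5],    # edge     <- either
              [0.0, 1.0],    # cylinder <- curved
              [0.0, 1.0]])   # sphere   <- curved

def compatibility(p_super, p_sub):
    """p3^T . C . p2: a high value means the layer 2 cell is a suitable
    receptive-field member for the layer 3 cell."""
    return float(p_super @ C @ p_sub)
```

Under these assumed entries, a curved layer 2 cell scores highly for a cylinder-polarized layer 3 cell, while a flat one scores zero.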
Finally, layer 4 cells represent the answer to the recognition problem in that they are abstractions of elements which are a more intelligent and more complete representation of the data than the original layer 0 data points.
Claims
1. A method for implementing object recognition in N-dimensional space by means of a network having multiple layers, said method comprising the following:
- The structure of the layers is hierarchical in that the layers are ordered and layers that are higher in the hierarchical order have cells connected by links representing ownership of cells contained by layers that are lower in the hierarchical order;
- The layers are assigned certain key features which the cells in the respective layers are capable of recognizing and representing;
- The layers are dynamic in size and in structure in that member cells may be created or destroyed and links between cells of adjoining layers may be created or destroyed so as to adapt to input data to be recognized;
- The layers are equipped with a rule for determining whether cells from subordinate layers should be included in receptive fields of cells from higher layers;
- The layer cells are equipped with a polarization vector which serves to determine the compatibility of said cells with cells of neighboring layers;
- The network is adapted through an iterative process in which cell ownership is modified and cells are created or destroyed to converge the state of the cells to a final persistent stable state of mutual reinforcement which represents a solution to the recognition problem.
2. A method according to claim 1, wherein links representing ownership relationships between cells of differing layers are created not just between adjacent layers but also between non-adjacent layers to assist in the recognition process.
3. A method according to claim 1, wherein the arrangement of the layers is not linear, but is itself a branched network.
4. A method according to claim 1, wherein the implementation of various features is selectively distributed among multiple hardware or software processing systems for improved performance.
5. A system for implementing object recognition in N-dimensional space by means of a network having multiple layers, said system comprising the following:
- The structure of the layers is hierarchical in that the layers are ordered and layers that are higher in the hierarchical order have cells connected by links representing ownership of cells contained by layers that are lower in the hierarchical order;
- The layers are assigned certain key features which the cells in the respective layers are capable of recognizing and representing;
- The layers are dynamic in size and in structure in that member cells may be created or destroyed and links between cells of adjoining layers may be created or destroyed so as to adapt to input data to be recognized;
- The layers are equipped with a rule for determining whether cells from subordinate layers should be included in receptive fields of cells from higher layers;
- The layer cells are equipped with a polarization vector which serves to determine the compatibility of said cells with cells of neighboring layers;
- The network is adapted through an iterative process in which cell ownership is modified and cells are created or destroyed to converge the state of the cells to a final persistent stable state of mutual reinforcement which represents a solution to the recognition problem.
6. A system according to claim 5, wherein links representing ownership relationships between cells of differing layers are created not just between adjacent layers but also between non-adjacent layers to assist in the recognition process.
7. A system according to claim 5, wherein the arrangement of the layers is not linear, but is itself a branched network.
8. A system according to claim 5, wherein the implementation of various features is selectively distributed among multiple hardware or software processing systems for improved performance.
Type: Application
Filed: Mar 4, 2005
Publication Date: Sep 14, 2006
Inventor: Klaus Bach (Hockenheim)
Application Number: 11/072,880
International Classification: G06K 9/00 (20060101); G06K 9/68 (20060101);