Object Recognition Using Textons and Shape Filters

- Microsoft

Given an image of structured and/or unstructured objects, semantically meaningful areas are automatically partitioned from the image, each area labeled with a specific object class. Shape filters are used to enable capturing of some or all of the shape, texture, and/or appearance context information. A shape filter comprises one or more regions of arbitrary shape, size, and/or position within a bounding area of an image, paired with a specified texton. A texton comprises information describing the texture of a patch of surface of an object. In a training process a sub-set of possible shape filters is selected and incorporated into a conditional random field model of object classes. The conditional random field model is then used for object detection and recognition.

Description
RELATED APPLICATION

This application is a continuation of U.S. patent application Ser. No. 11/534,019, filed on Sep. 21, 2006, which is incorporated by reference herein.

TECHNICAL FIELD

This description relates generally to image processing and more specifically to object detection and recognition.

BACKGROUND

Object detection and recognition are difficult problems in the field of computer vision. Object detection involves determining the presence of one or more objects in an image of a scene. Image segmentation comprises identifying all image elements that are part of the same object in an image. Object recognition comprises assigning semantic labels to the detected objects, for example, determining a class of objects to which an object belongs, such as cars, people or buildings. Object recognition can also comprise assigning class instance labels to detected objects, for example, determining that a detected object is a particular type of car. Object recognition may also be referred to as semantic segmentation.

There is a need to provide simple, accurate, fast and computationally inexpensive methods of object detection and recognition for many applications. For example, given a photograph or other image it is required to automatically partition it into semantically meaningful areas each labeled with a specific object class. In addition, there is a need to cope with both structured object classes such as people, cars, buildings and unstructured object classes such as sky, grass or water. When dealing with large collections of images and/or large numbers of object classes, efficiency of computation is a particular requirement.

SUMMARY

The following presents a simplified summary of the disclosure in order to provide a basic understanding to the reader. This summary is not an extensive overview of the disclosure and it does not identify key/critical elements of the invention or delineate the scope of the invention. Its sole purpose is to present some concepts disclosed herein in a simplified form as a prelude to the more detailed description that is presented later.

Given an image of structured and/or unstructured objects we automatically partition that image into semantically meaningful areas each labeled with a specific object class. We use a novel type of feature which we refer to as a shape filter. Shape filters enable us to capture some or all of shape, texture and appearance context information. A shape filter comprises one or more regions of arbitrary shape, size and position within a bounding area of an image, paired with a specified texton. A texton comprises information describing the texture of a patch of surface of an object. In a training process we select a sub-set of possible shape filters and incorporate those into a conditional random field model of object classes. That model is then used for object detection and recognition.

Many of the attendant features will be more readily appreciated as the same becomes better understood by reference to the following detailed description considered in connection with the accompanying drawings.

DESCRIPTION OF THE DRAWINGS

The present description will be better understood from the following detailed description read in light of the accompanying drawings, wherein:

FIG. 1 is a high level schematic diagram of an object detection and recognition system showing input and output;

FIG. 2a is an example input image;

FIG. 2b is an example of a texton map formed from the input image of FIG. 2a;

FIG. 2c is an example of a feature pair;

FIG. 2d shows part of the texton map of FIG. 2b with three superimposed rectangles;

FIG. 3 is a flow diagram of a method of object detection and recognition;

FIG. 4 is a flow diagram of a method during a training phase of an object detection and recognition system;

FIG. 5 is a flow diagram of the method of FIG. 3 with a cross-validation process;

FIG. 6 is a schematic diagram of the object detection and recognition system of FIG. 1 in more detail.

Like reference numerals are used to designate like parts in the accompanying drawings.

DETAILED DESCRIPTION

The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present example may be constructed or utilized. The description sets forth the functions of the example and the sequence of steps for constructing and operating the example. However, the same or equivalent functions and sequences may be accomplished by different examples.

Although the present examples are described and illustrated herein as being implemented in an object detection and recognition system, the system described is provided as an example and not a limitation. As those skilled in the art will appreciate, the present examples are suitable for application in a variety of different types of image processing systems. For example, it is not essential to use photographs or still digital images; video images or other types of images may be used such as medical images, infra-red images, x-ray images or any other suitable type of image.

The term “image element” is used to refer to a pixel, or a plurality of pixels such as a cluster of pixels or block of neighboring pixels.

FIG. 1 is a high level schematic diagram of an object detection and recognition system 12 which is provided using any suitable type of processor, such as a computer with software, as known in the art, enabling software implementing the invention to be executed. The object detection and recognition system comprises an input arranged to receive images 10 within which it is required to detect and recognize objects. The input may be an image capture device of any suitable type, such as a camera, medical imaging device, video camera or the like. The input may also be any suitable means for receiving a previously obtained image, such as a communications link, disk drive, USB connection or other suitable input means.

The input image is of one or more objects to be detected and recognized. No specific requirements for lighting conditions, camera viewpoint, scene geometry, object pose or articulation are required for the input image or for training images used during a training phase discussed in more detail below. In addition, the image may be of structured and/or unstructured objects. For example, sky is an example of an unstructured object and a person or car is an example of a structured object.

The object detection and recognition system 12 also has an output arranged to provide an object label map 14 associated with an input image. The output is of any suitable type such as a graphical user interface, a file or any other suitable output. The object label map comprises, for each image element, a label indicating an object class to which that image element belongs. For example, the image element may be a pixel or group of pixels and the object class may be for example, building, grass, tree, cow, sheep, sky, aeroplane, water, face, car, bike, flower, sign, bird, book, chair, road, cat, dog, body, boat. Any suitable number of object classes may be used and in one example discussed below, 21 object classes are used.

We devise a novel type of feature for use in object detection and recognition and we refer to this type of feature herein as a shape filter. Shape filters enable us to capture some or all of shape, texture and appearance context information. An example of such context information might be that “cow” pixels tend to be surrounded by “grass” pixels. Another example of such context information might be that if a feature associated with the neck of a cow is observed on the left hand side, then features associated with cow legs are expected on the right hand side.

A shape filter comprises one or more regions of arbitrary shape, size and position within a bounding area of an image, used together with a specified texton (see below for an explanation of textons). In one embodiment of the invention the regions are rectangles whose four corners are chosen at random (such that a square is an example of a rectangle) and the bounding area is a box covering about half the image area. However, this is not essential; the regions may be of any suitable shape and the bounding area may be of other sizes relative to the image. For example, the bounding area may be movable within the image and have an area about ¼ to ¾ of the image area. FIG. 7 shows a bounding area 70 the centre of which is marked by a cross and with random rectangular regions 71 (also referred to herein as masks) shown in each bounding area. A plurality of such rectangular regions are chosen using a pseudo-random process during formation of a shape filter as described in more detail below.
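For illustration, a minimal sketch (in Python with NumPy) of how a set of random rectangle masks might be drawn within a bounding area; the helper name, the bounding-box fraction and the offset convention are illustrative assumptions rather than the described implementation.

```python
import numpy as np

def sample_rectangle_masks(num_masks, image_shape, box_fraction=0.5, rng=None):
    """Draw random rectangle masks, expressed as (top, bottom, left, right)
    offsets relative to the bounding-area centre. The bounding box covers
    roughly `box_fraction` of the image area (illustrative assumption)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image_shape
    # Half-extents of a centred bounding box with the requested area fraction.
    half_h = int(h * np.sqrt(box_fraction) / 2)
    half_w = int(w * np.sqrt(box_fraction) / 2)
    masks = []
    for _ in range(num_masks):
        # Two random corners inside the bounding box define one rectangle.
        top, bottom = np.sort(rng.integers(-half_h, half_h + 1, size=2))
        left, right = np.sort(rng.integers(-half_w, half_w + 1, size=2))
        masks.append((int(top), int(bottom), int(left), int(right)))
    return masks
```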

Each shape filter has an associated texton. For example, for a given set of regions (e.g. rectangle masks) in a bounding area and a texton we compute feature responses. Textons are known in the art and are described in detail in for example, Leung, T. and Malik, J. “Representing and recognizing the visual appearance of materials using three-dimensional textons” IJCV 43 2001 29-44. A texton can be thought of as information describing the texture of a patch of surface of an object. The information can comprise geometric and photometric properties of the patch. For example, a texton may comprise information describing a patch of surface having a ridge, groove, spot, stripe or combination thereof. Given a relatively small texton vocabulary, this can be used to characterize images of any material as known in the art from the Leung and Malik paper mentioned above.

In one embodiment we learn a vocabulary or dictionary of textons from a set of training images that are also used during other training aspects of processes described herein. However, it is also possible to use a pre-specified set of textons formed from independent information.

Any suitable method for learning the dictionary or vocabulary of textons can be used, for example, convolving a multi-dimensional filter bank with the set of training images and running a clustering algorithm on the filter responses. Any suitable filter bank can be used, such as the 17-dimensional filter bank described in the work, "Categorization by learned universal visual dictionary" Winn, J., Criminisi, A., Minka, T. Int. Conf. on Computer Vision 2005. That filter bank comprises scaled Gaussians, x and y derivatives of Gaussians, and Laplacians of Gaussians. The Gaussians are applied to three color channels, while the remaining filters are applied only to luminance. In an example, the perceptually uniform CIELab color space is used. However, any suitable color space can be used. Any suitable clustering algorithm can be used, such as K-means clustering using Mahalanobis distance, mean shift or agglomerative clustering.
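As a rough sketch of this dictionary-learning step, the following approximates the filter-bank convolution and clustering in Python (scipy/scikit-learn). The exact 17-dimensional filter bank is not reproduced; Gaussian, derivative and Laplacian filters at a few scales stand in for it, and whitening the responses before Euclidean K-means stands in for the Mahalanobis distance.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace
from sklearn.cluster import KMeans

def filter_bank_responses(lab_image, scales=(1, 2, 4)):
    """Simplified stand-in for the 17-D filter bank: Gaussians on all three
    CIELab channels; derivatives and Laplacians on luminance only."""
    L = lab_image[..., 0]
    responses = []
    for s in scales:
        for ch in range(3):
            responses.append(gaussian_filter(lab_image[..., ch], s))   # Gaussians
        responses.append(gaussian_filter(L, s, order=(0, 1)))          # d/dx
        responses.append(gaussian_filter(L, s, order=(1, 0)))          # d/dy
        responses.append(gaussian_laplace(L, s))                       # Laplacian
    return np.stack(responses, axis=-1)                                # H x W x D

def learn_texton_dictionary(training_lab_images, num_textons=400):
    """Cluster filter responses over all training images into a texton dictionary."""
    feats = []
    for im in training_lab_images:
        r = filter_bank_responses(im)
        feats.append(r.reshape(-1, r.shape[-1]))
    feats = np.concatenate(feats)
    # Whitening approximates the Mahalanobis metric used for clustering.
    mean, std = feats.mean(axis=0), feats.std(axis=0) + 1e-8
    kmeans = KMeans(n_clusters=num_textons, n_init=4).fit((feats - mean) / std)
    return kmeans, mean, std
```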

As mentioned above, for a given feature pair comprising a set of regions (e.g. rectangle masks) in a bounding area and a texton we compute feature responses. This is now described with reference to FIG. 2. FIG. 2a shows an example input image which in this case is a photograph of a cow and a calf standing on grass. A texton map is formed from the input image and FIG. 2b shows a texton map computed from FIG. 2a. The texton map comprises, for each image element, a label indicating which texton from the dictionary of textons most appropriately describes that image element. For example, each image element is assigned to its nearest cluster center (for example, using Mahalanobis distance). Please note that the images of FIG. 2 are presented here in grey scale. The different grey values in FIGS. 2b and 2d are intended to represent different texton indices.
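Continuing the previous sketch, a texton map can then be formed by assigning every image element to its nearest cluster centre; this reuses the filter_bank_responses helper and whitened K-means model defined above and is likewise only an illustrative approximation.

```python
def texton_map(lab_image, kmeans, mean, std):
    """Label every pixel with the index of its nearest texton cluster centre."""
    r = filter_bank_responses(lab_image)
    flat = (r.reshape(-1, r.shape[-1]) - mean) / std
    return kmeans.predict(flat).reshape(r.shape[:2])   # H x W map of texton indices
```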

FIG. 2c shows a particular feature pair comprising a bounding region (black rectangle) comprising a rectangle mask r (white rectangle inside bounding region) which is offset from the centre (indicated by cross) of the bounding region and a texton t which describes grass texture for example. A feature response can be thought of as the number of instances of a particular type of texton under the mask in a particular location. In the example of FIG. 2d a feature response for the feature pair of FIG. 2c is calculated at three positions i1, i2, i3 as illustrated. If A is the area of the mask rectangle r in units of image elements, then the feature response v(i1, r, t) is approximately equal to A because in the position i1 the mask is located fully over grass textons in the texton map. For position i2 the feature response is approximately zero because the mask is located fully over non-grass textons in the texton map. For position i3 the feature response is approximately A/2 because the mask is located over about half grass textons and half non-grass textons in the texton map. For this feature pair (rectangle mask r and grass texton t) our system learns that points such as i1 belonging to “cow” regions tend to produce large feature responses. This enables our system to make use of context information such as “cow pixels tend to be surrounded by grass pixels”. In this way feature pairs (also referred to as shape filters) enable us to model appearance-based context.
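Read directly as a count, the feature response v(i, r, t) can be sketched as follows: translate the rectangle mask (kept at its offset from the bounding-area centre) to position i and count the texton-map pixels underneath that carry texton t. This is the naive version; the integral-image speed-up is described later.

```python
import numpy as np

def feature_response(texton_map, i, mask, texton):
    """v(i, r, t): number of pixels of texton `texton` under rectangle `mask`,
    where `mask` = (top, bottom, left, right) offsets relative to position i."""
    h, w = texton_map.shape
    iy, ix = i
    top, bottom, left, right = mask
    y0, y1 = np.clip([iy + top, iy + bottom], 0, h)
    x0, x1 = np.clip([ix + left, ix + right], 0, w)
    return int(np.count_nonzero(texton_map[y0:y1, x0:x1] == texton))
```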

Following a similar argument as before, our system will learn that “eye pixels” tend to occur near “mouth” or “nose pixels”. Therefore, shape filters enable us to model local co-occurrences of object parts within an object region.

The way in which shape filters are used in an object detection and recognition system is now described with reference to FIG. 3. The method of FIG. 3 relates to a test phase; that is operation of the object detection and recognition system on a previously unseen image to produce an object label map. Training phases are also performed as described later with reference to FIGS. 4 and 5. In some embodiments the training and test phases are integral and not completely independent of one another so that it is not necessarily the case that training is always carried out first, independently of use of the system for processing new images.

An input image is received (box 30) such as the input image of FIG. 2a and a texton map is evaluated as already described with reference to FIG. 2b and using a dictionary of textons. A plurality of shape filters are then evaluated (by calculating feature responses as mentioned above) over the texton map (see box 32). For example, thousands of shape filters may be used. These shape filters are determined during a training phase described in more detail below during which a multi-class classifier is formed. During the process of learning the multi-class classifier, those shape filters which are useful for discriminating between object classes are learnt.

A belief distribution over a set of object classes which may be recognized is then estimated for each image element (see box 33). For example, there are 21 possible object classes although the method may be extended to larger numbers of object classes. This is achieved using a conditional random field (CRF) model of object classes.

The CRF model is then used to agree an overall object labeling (see box 34) optionally taking into account input from a cross validation process (see below with reference to FIG. 5 box 37) and a process for learning parameters for color potentials in the CRF model during the test phase (see box 35). An inference process is applied to the CRF model to infer the overall object labeling, i.e. the object label map. Any suitable inference process can be used such as an alpha-expansion graph-cut algorithm as described in Boykov, Y., Jolly, M. P. “Interactive graph cuts for optimal boundary and region segmentation of objects in n-d images.” Proc. of IEEE ICCV 2001 or a belief propagation algorithm.

Any suitable CRF model may be used and an example is described in detail below. The CRF model defines the conditional probability of object class labels given an image on the basis of a plurality of potentials comprising at least shape-texture potentials and optionally also one or more of color potentials, location potentials and edge potentials. The shape-texture potentials use the novel shape filters we mention above to represent shape, texture and appearance context of object classes. During a learning process, optionally comprising a boosting procedure, we select which shape filters best discriminate between object classes. This is described in more detail below.

In embodiments where color potentials are used in the CRF model, these are arranged to capture the color distribution of the instances of a class in a particular image. Color parameters are learnt separately for each image during the test phase (i.e. use of the system for object detection and recognition on previously unseen images). This is indicated by box 35 of FIG. 3 which feeds back to box 34 of FIG. 3.

As mentioned above, during a training phase a multi-class classifier is learnt. This is now described in more detail with reference to FIG. 4. As mentioned above a set of training images is provided. For example, about 250 training images may be used for about 20 object classes although larger numbers of object classes and training images may be used. For each training image a known object label map is provided in advance, for example, by manually labeling the training images. Pixels may be labeled as void in these known object label maps to cope with pixels that do not belong to an object class for example. Void image elements are ignored for training and testing. For each training image, a texton map is computed as described above.

The training object label map, texton map pairs are provided as input to a learning process of any suitable type which is arranged to identify shape filters which are useful or effective for classifying the image elements into object classes. The learning process effectively produces a multi-class classifier.

In one embodiment the multi-class classifier is learnt using an adapted version of the Joint Boosting algorithm described in “Sharing features: efficient boosting procedures for multiclass object detection.” Torralba, A., Murphy, K., Freeman, W. Proc. of IEEE CVPR 2004 762-769. This is described in more detail below. However, other methods of learning such a multi-class classifier may be used, such as using support vector machines or decision forests. Decision forests are known in the art and comprise a plurality of decision trees such as the randomized decision trees described in V. Lepetit, P. Lagger, and P. Fua, “Randomised trees for real-time keypoint recognition”, CVPR05, pages 775-781, 2005. Support vector machines are also known in the art as described in Vapnik, V. Statistical Learning Theory. Wiley-Interscience, New York, (1998).

Those shape filters selected during the process of learning the multi-class classifier are then used during a test phase (see box 42 of FIG. 4).

FIG. 5 is the same as FIG. 3 but shows the cross validation process 37. During the cross validation process the parameters of the CRF model are optimized in order to produce optimal results on the training examples.

More detail about a particular example of a CRF model of object classes is now given.

In this example, the use of a Conditional Random Field allows us to incorporate shape, texture, color, location and edge cues in a single unified model. We define the conditional probability of the class labels c given an image x as

\[
\log P(c \mid x, \theta) = \sum_i \Big( \underbrace{\psi_i(c_i, x; \theta_\psi)}_{\text{shape-texture}} + \underbrace{\pi(c_i, x_i; \theta_\pi)}_{\text{color}} + \underbrace{\lambda(c_i, i; \theta_\lambda)}_{\text{location}} \Big) + \sum_{(i,j) \in \varepsilon} \underbrace{\phi(c_i, c_j, g_{ij}(x); \theta_\phi)}_{\text{edge}} - \log Z(\theta, x) \qquad (1)
\]

where ε is the set of edges in the 4-connected grid, Z(θ, x) is the partition function, θ = {θψ, θπ, θλ, θφ} are the model parameters, and i and j index nodes in the grid (corresponding to positions in the image).
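For orientation, a sketch of how the unnormalized log-probability of equation (1) might be assembled, assuming the unary potentials have already been evaluated into per-pixel, per-class arrays and the pairwise term is supplied as a callable; the array layout and function names are assumptions for illustration.

```python
import numpy as np

def unnormalised_log_prob(labels, shape_texture, colour, location, edge_weight, image):
    """Sum of the four potential types in equation (1), omitting log Z.
    `shape_texture`, `colour`, `location`: H x W x C per-pixel per-class potentials.
    `edge_weight(a, b)`: theta_phi^T g_ij for neighbouring pixel colour arrays."""
    h, w = labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    unary = (shape_texture[ys, xs, labels]
             + colour[ys, xs, labels]
             + location[ys, xs, labels]).sum()
    pairwise = 0.0
    # 4-connected grid: right and down neighbours cover every edge exactly once.
    for dy, dx in ((0, 1), (1, 0)):
        a = labels[:h - dy, :w - dx]
        b = labels[dy:, dx:]
        g = edge_weight(image[:h - dy, :w - dx], image[dy:, dx:])
        pairwise += np.sum(g * (a != b))       # penalty only where labels differ
    return unary - pairwise
```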

Shape-texture potentials. The shape-texture potentials ψ use features selected by boosting to represent shape, texture and appearance context of the object classes. These features and the boosting procedure used to perform feature selection while training a multi-class logistic classifier are described below. We use this classifier directly as a potential in the CRF, so that


\[
\psi_i(c_i, x; \theta_\psi) = \log \tilde{P}_i(c_i \mid x) \qquad (2)
\]

where P̃i(ci|x) is the normalized distribution given by the classifier using learned parameters θψ.

Edge potentials. Pair-wise edge potentials φ have the form of a contrast sensitive Potts model as described in Boykov, Y., Jolly, M. P.: “Interactive graph cuts for optimal boundary and region segmentation of objects in n-d images” Proc of IEEE ICCV (2001),


\[
\phi(c_i, c_j, g_{ij}(x); \theta_\phi) = -\theta_\phi^{\mathsf{T}}\, g_{ij}(x)\, \delta(c_i \neq c_j) \qquad (3)
\]

In this example, we set the edge feature gij to measure the difference in color between neighboring pixels, gij = [exp(−β‖xi − xj‖²), 1]T, where xi and xj are three-dimensional vectors representing the colors of the ith and jth pixels. Including the unit element allows a bias to be learned, to remove small, isolated regions. The quantity β is set (separately for each image) to 1/(2⟨‖xi − xj‖²⟩), where ⟨·⟩ denotes an average over the image.
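A sketch of this contrast-sensitive edge feature, with β set per image from the mean squared colour difference between 4-connected neighbours; the precise averaging convention is my reading of the text and should be treated as an assumption.

```python
import numpy as np

def edge_features(image):
    """g_ij = [exp(-beta * ||x_i - x_j||^2), 1] for right/down neighbour pairs."""
    h, w = image.shape[:2]
    sq_diffs = []
    for dy, dx in ((0, 1), (1, 0)):
        d = image[dy:, dx:, :].astype(float) - image[:h - dy, :w - dx, :].astype(float)
        sq_diffs.append(np.sum(d ** 2, axis=-1))
    beta = 1.0 / (2.0 * np.mean(np.concatenate([d.ravel() for d in sq_diffs])))
    # The constant 1 lets a bias be learned, discouraging small isolated regions.
    return [np.stack([np.exp(-beta * d), np.ones_like(d)], axis=-1) for d in sq_diffs], beta
```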

Color potentials capture the color distribution of the instances of a class in a particular image. This choice is motivated by the fact that, whilst the distribution of color across an entire class of objects is broad, the color distribution across one or a few instances of the class is typically compact. Hence the parameters θπ are learned separately for each image (and so this learning step needs to be carried out at test time). This aspect of the model captures the more precise image-specific appearance that a solely class-specific recognition system cannot.

Color models are represented as mixtures of Gaussians (GMM) in color space where the mixture coefficients depend on the class label. The conditional probability of the color of a pixel x is given by

\[
P(x \mid c) = \sum_k P(k \mid c)\, \mathcal{N}\!\left(x \mid \bar{x}_k, \Sigma_k\right) \qquad (4)
\]

where k is a random variable representing the mixture component to which the pixel is assigned, and x̄k and Σk are the mean and covariance of component k respectively. Notice that the mixture components are shared between different classes and only the coefficients depend on the class label, making the model much more efficient to learn than a separate GMM for each class. For a particular pixel xi we compute a fixed soft assignment to the mixture components P(k|xi). However, hard assignments may also be used. Given this assignment, we choose our color potential to have the form

\[
\pi(c_i, x_i; \theta_\pi) = \log \sum_k \theta_\pi(c_i, k)\, P(k \mid x_i) \qquad (5)
\]

where parameters θπ act as a probability lookup-table.
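A sketch of equations (4) and (5): the soft component assignments P(k|xi) are taken from a single Gaussian mixture fitted to the image's pixels (scikit-learn's GaussianMixture stands in here), and θπ acts as the per-class probability lookup table; names and defaults are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def soft_assignments(image, n_components=15):
    """P(k | x_i): responsibility of each shared colour component, per pixel."""
    pixels = image.reshape(-1, 3).astype(float)
    gmm = GaussianMixture(n_components=n_components, covariance_type='diag').fit(pixels)
    return gmm.predict_proba(pixels).reshape(image.shape[0], image.shape[1], -1)

def colour_potential(assignments, theta_pi):
    """pi(c_i, x_i; theta_pi) = log sum_k theta_pi(c_i, k) P(k | x_i),
    returned as an H x W x num_classes array of log potentials."""
    return np.log(np.einsum('hwk,ck->hwc', assignments, theta_pi) + 1e-12)
```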

Location potentials capture the weak dependence of the class label on the absolute location of the pixel in the image. The potential takes the form of a look-up table with an entry for each class and pixel location,


\[
\lambda_i(c_i, \hat{\imath}; \theta_\lambda) = \log \theta_\lambda(c_i, \hat{\imath}) \qquad (6)
\]

The index î is the normalized version of the pixel index i, where the normalization allows for images of different sizes; e.g. if the image is mapped onto a canonical square then î indicates the pixel position within this canonical square.
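A small sketch of the location potential: the pixel index is normalized onto a canonical square grid and used, together with the class label, to index a learned lookup table; the grid size is an illustrative choice.

```python
import numpy as np

def location_potential(image_shape, theta_lambda):
    """lambda_i(c_i, i_hat) = log theta_lambda(c_i, i_hat) for every pixel and class.
    `theta_lambda` has shape (num_classes, G, G) on a canonical G x G square."""
    h, w = image_shape
    g = theta_lambda.shape[1]
    rows = np.arange(h) * g // h          # normalized row index
    cols = np.arange(w) * g // w          # normalized column index
    table = np.log(theta_lambda[:, rows[:, None], cols[None, :]])   # C x H x W
    return np.transpose(table, (1, 2, 0))                           # H x W x C
```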

Learning the CRF Parameters

In one example, we use a method based on piecewise training. Piecewise training involves dividing the CRF model into pieces, each of which is trained independently. Piecewise training is known in the art. For example, as described in Sutton, C., McCallum, A.: “Piecewise training of undirected models”, In: 21st conference on Uncertainty in Artificial Intelligence, (2005). This training method minimizes an upper bound on the log partition function. However, this bound is generally an extremely loose one and performing parameter training in this way led to problems with over counting during inference in the combined model. We therefore modified piecewise training to incorporate fixed powers in order to compensate for over counting.

In an example, each of the potential types is trained separately to produce a normalized model. For the shape-texture potentials, we simply use the parameters learned during boosting. For the location potentials, we train the parameters by maximizing the likelihood of the normalized model containing just that potential and raising the result to a fixed power to compensate for over counting. Hence, the location parameters are learned using

\[
\theta_\lambda(c_i, \hat{\imath}) = \left( \frac{N_{c,\hat{\imath}} + \alpha_\lambda}{N_{\hat{\imath}} + \alpha_\lambda} \right)^{\omega_\lambda} \qquad (7)
\]

where Nc,î is the number of pixels of class c at normalized location î in the training set, Nî is the total number of pixels at location î, and αλ is a small integer (we use αλ=1) corresponding to a weak Dirichlet prior on θλ.
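A sketch of equation (7): the table entries are class counts at each canonical location, normalized, smoothed by the weak Dirichlet prior αλ, and raised to the fixed power ωλ; the grid size and default power are illustrative assumptions.

```python
import numpy as np

def learn_location_table(label_maps, num_classes, grid=20, alpha=1.0, omega=0.1):
    """theta_lambda(c, i_hat) = ((N_{c,i_hat} + alpha) / (N_{i_hat} + alpha)) ** omega."""
    counts = np.zeros((num_classes, grid, grid))
    totals = np.zeros((grid, grid))
    for labels in label_maps:
        h, w = labels.shape
        rows = np.broadcast_to((np.arange(h) * grid // h)[:, None], (h, w))
        cols = np.broadcast_to((np.arange(w) * grid // w)[None, :], (h, w))
        for c in range(num_classes):
            np.add.at(counts[c], (rows, cols), (labels == c).astype(float))
        np.add.at(totals, (rows, cols), 1.0)
    return ((counts + alpha) / (totals + alpha)) ** omega
```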

At test time the color parameters are learned for each image in a piecewise fashion using Iterative Conditional Modes. First a class labeling c* is inferred and then the color parameters are updated using

\[
\theta_\pi(c_i, k) = \left( \frac{\sum_i \delta(c_i = c_i^{*})\, P(k \mid x_i) + \alpha_\pi}{\sum_i P(k \mid x_i) + \alpha_\pi} \right)^{\omega_\pi} \qquad (8)
\]

Given this new parameter setting, a new class labeling is inferred and this procedure is iterated. The Dirichlet prior parameter απ was set to 0.1. In one example, a power parameter ωπ = 3, fifteen color components and two iterations of this procedure gave good results. However, other power parameter values, numbers of color components and numbers of iterations can be used. Because we are training in pieces, the color parameters do not need to be learned for the training set.
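A sketch of this iterated-conditional-modes loop: equation (8) re-estimates the colour lookup table from the current labelling and the soft assignments, then the labelling is re-inferred; the inference call is left as a placeholder since it depends on the CRF solver used.

```python
import numpy as np

def update_colour_table(labels, assignments, num_classes, alpha_pi=0.1, omega_pi=3.0):
    """theta_pi(c, k) per equation (8), given the current labelling c*."""
    num_k = assignments.shape[-1]
    flat_p = assignments.reshape(-1, num_k)
    flat_c = labels.ravel()
    denom = flat_p.sum(axis=0) + alpha_pi
    theta = np.zeros((num_classes, num_k))
    for c in range(num_classes):
        theta[c] = (flat_p[flat_c == c].sum(axis=0) + alpha_pi) / denom
    return theta ** omega_pi

def icm_colour(initial_labels, assignments, num_classes, infer, iterations=2):
    """Alternate between updating theta_pi and re-inferring the class labelling."""
    labels = initial_labels
    for _ in range(iterations):
        theta_pi = update_colour_table(labels, assignments, num_classes)
        labels = infer(theta_pi)   # placeholder for CRF inference with new colour term
    return labels, theta_pi
```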

The edge potential parameters θφ may be learnt by maximum likelihood, specified manually, or determined in any other suitable way.

More detail about an example inference process (box 3b, FIG. 3) is now given.

Given a set of parameters learned for the CRF model, we wish to find the most probable labeling c*; i.e. the labeling that maximizes the conditional probability (1). In one example, the optimal labeling is found by applying the alpha-expansion graph-cut algorithm mentioned above (note that our energy is regular). In this example the initial configuration is given by the mode of the unary potentials.

More detail about textons and shape filters in a particular example are now given.

Textons. Efficiency demands compact representations for the range of different appearances of an object. For this we utilize textons. In an example, a dictionary of textons is learned by convolving a 17-dimensional filter bank with all the training images and running K-means clustering (using Mahalanobis distance) on the filter responses. Finally, each pixel in each image is assigned to the nearest cluster center, thus providing a texton map.

In an example, shape filters consist of a set of NR rectangular regions whose four corners are chosen at random within a fixed bounding box covering about half the image area. For a particular texton t, the feature response at location i is the count of instances of that texton under the offset rectangle mask. These filter responses can be efficiently computed over a whole image with integral images as described in Viola, P., Jones, M.: “Rapid object detection using a boosted cascade of simple features”, in: CVPR01, (2001), 1:511-518 (computing K integral images for each image, where K is the number of textons).
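A sketch of that speed-up: one integral image per texton lets any rectangle count be read off with four look-ups, as in the Viola and Jones reference; the mask convention matches the naive count shown earlier.

```python
import numpy as np

def texton_integral_images(texton_map, num_textons):
    """One (H+1) x (W+1) integral image per texton index (K images in total)."""
    h, w = texton_map.shape
    integrals = np.zeros((num_textons, h + 1, w + 1))
    for t in range(num_textons):
        integrals[t, 1:, 1:] = np.cumsum(np.cumsum(texton_map == t, axis=0), axis=1)
    return integrals

def fast_feature_response(integrals, i, mask, texton):
    """Same count as the naive version, computed with four look-ups."""
    h, w = integrals.shape[1] - 1, integrals.shape[2] - 1
    iy, ix = i
    top, bottom, left, right = mask
    y0, y1 = np.clip([iy + top, iy + bottom], 0, h)
    x0, x1 = np.clip([ix + left, ix + right], 0, w)
    s = integrals[texton]
    return float(s[y1, x1] - s[y0, x1] - s[y1, x0] + s[y0, x0])
```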

Joint Boosting for unary classification. In an example, a multi-class classifier is learned using an adapted version of the Joint Boosting algorithm mentioned above. The algorithm iteratively builds a strong classifier as a sum of ‘weak classifiers’, simultaneously selecting discriminative features. Each weak classifier is a decision stump based on a thresholded feature response, and is shared between a set of classes, allowing a single feature to help classify several classes at once. The sharing of features between classes allows for classification with cost sub-linear in the number of classes, and also leads to improved generalization.

The learned ‘strong’ classifier is an additive model of the form H(ci) = Σm hm(ci), summing the classification confidences of M weak classifiers hm, m = 1, . . . , M. This confidence value can be reinterpreted as a probability distribution over ci using the softmax transformation

\[
\tilde{P}_i(c_i \mid x) = \frac{\exp H(c_i)}{\sum_{c_i'} \exp H(c_i')}.
\]

Each weak-learner is a decision stump of the form

\[
h(c_i) = \begin{cases} a\,\delta\!\left(v(i, r, t) > \theta\right) + b & \text{if } c_i \in N \\ k^{c_i} & \text{otherwise} \end{cases} \qquad (9)
\]

with parameters (a, b, {kc}c∉N, θ, N, r, t), where δ(·) is a 0-1 indicator function. The r and t indices together specify the shape filter feature (rectangle mask and texton respectively), with v(i,r,t) representing the corresponding feature response at position i. For those classes that share this feature (ci∈N), the weak learner gives h(ci)∈{a+b, b} depending on the comparison of v(i,r,t) to a threshold θ. For each class not sharing the feature (ci∉N) there is a constant kci that ensures that asymmetrical sets of positive and negative training examples do not adversely affect the learning procedure.
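A sketch of one weak learner from equation (9) and of the softmax over the accumulated strong-classifier confidences; the parameter names follow the text, while the containers used to hold them are illustrative.

```python
import numpy as np

def weak_classifier(feature_value, cls, a, b, k, sharing_set, theta):
    """h(c_i): a*delta(v(i,r,t) > theta) + b if the class shares the feature,
    otherwise the per-class constant k[c_i]."""
    if cls in sharing_set:
        return a * float(feature_value > theta) + b
    return k[cls]

def class_distribution(strong_confidences):
    """Softmax of H(c_i) over classes, giving the normalized distribution."""
    z = strong_confidences - np.max(strong_confidences)   # for numerical stability
    e = np.exp(z)
    return e / e.sum()
```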

The boosting algorithm iteratively minimizes an error function which unfortunately requires an expensive brute-force search over the sharing set N, the features (r and t), and the thresholds θ. Given these parameters, a closed form solution exists for a, b and {kc}c∉N. The set of all possible sharing sets is exponentially large, and so we employ a quadratic-cost greedy approximation as described in Torralba, A., Murphy, K., Freeman, W.: “Sharing Features: Efficient boosting procedures for multiclass object detection”, Proc of IEEE CVPR (2004) 762-769. To speed up the minimization over features we employ the random feature selection procedure described below. Optimization over θ∈Θ for a discrete set Θ can be made efficient by use of histograms of feature responses.

Sub-sampling and random feature selection for training efficiency. The considerable memory and processing requirements make training on a per-pixel basis impractical. Computational expense may be reduced by calculating filter responses on a Δ×Δ grid (for example either 3×3 for the smaller databases or 5×5 for the largest database). The shape filter responses themselves are calculated at full resolution to enable per-pixel accurate classification at test time.

Even with sub-sampling, exhaustive searching over all features (pairs of rectangle and texton) at each round of boosting is prohibitive. However, our algorithm examines only a fraction τ<<1 of features, randomly chosen at each round. All our results use τ=0.003 so that, over several thousand rounds, there is high probability of testing all features at least once.
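Both efficiency measures are simple to sketch: training positions are taken on a Δ×Δ grid, and at each boosting round only a random fraction τ of the candidate (mask, texton) pairs is examined; function names and defaults are illustrative.

```python
import numpy as np

def grid_positions(image_shape, step=5):
    """Training positions sub-sampled on a step x step grid."""
    h, w = image_shape
    return [(y, x) for y in range(0, h, step) for x in range(0, w, step)]

def sample_candidate_features(all_features, tau=0.003, rng=None):
    """Examine only a random fraction tau of (mask, texton) pairs this round."""
    rng = np.random.default_rng() if rng is None else rng
    n = max(1, int(round(tau * len(all_features))))
    idx = rng.choice(len(all_features), size=n, replace=False)
    return [all_features[j] for j in idx]
```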

We found that using random feature selection improves the training time by several orders of magnitude whilst having only a small impact on the training error.

FIG. 6 is a schematic diagram of an object detection and recognition system 12. It comprises an input arranged to receive images 10 and an output arranged to provide object label maps 14. The system is provided using any suitable processor 63 such as a computer as mentioned above with reference to FIG. 1. A classifier 60, which is a multi-class classifier is formed during a training process and a conditional random field model 61 is provided as described above. An inference mechanism 62 is used to infer an object label map using the conditional random field model. A texton map producer 64 forms texton maps by accessing a dictionary of textons which may be stored at a memory either integral with the system 12 or in communication with the system. A shape filter evaluator 65 applies shape filter and texton pairs to texton maps as described above and a cross validator 66 provides cross validation processes as described with reference to FIG. 5.

In an example an object detection and recognition system is provided comprising:

    • an input arranged to receive a plurality of training images of objects;
    • an input arranged to receive an object label map for each training image, each object label map comprising a label for each image element specifying one of a plurality of object classes;
    • means for accessing a dictionary of textons, each texton comprising information describing the texture of a patch of surface of an object;
    • a texton map producer arranged to form a texton map for each training image using the dictionary of textons, each texton map comprising, for each image element a label indicating a texton;
    • a shape filter evaluator arranged, for each texton map to compute a plurality of feature responses by applying a different shape filter for each feature response;
    • a processor arranged to select a sub-set of the shape filter, texton pairs used in computing the feature responses by forming a multi-class classifier to classify image elements into the object classes on the basis of at least some of the feature responses; and
    • wherein the processor is also arranged to form an object label map for a previously unseen image using the selected shape filter, texton pairs.

The term ‘computer’ is used herein to refer to any device with processing capability such that it can execute instructions. Those skilled in the art will realize that such processing capabilities are incorporated into many different devices and therefore the term ‘computer’ includes PCs, servers, mobile telephones, personal digital assistants and many other devices.

The methods described herein may be performed by software in machine readable form on a storage medium. The software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or simultaneously.

This acknowledges that software can be a valuable, separately tradable commodity. It is intended to encompass software, which runs on or controls “dumb” or standard hardware, to carry out the desired functions. It is also intended to encompass software which “describes” or defines the configuration of hardware, such as HDL (hardware description language) software, as is used for designing silicon chips, or for configuring universal programmable chips, to carry out desired functions.

Those skilled in the art will realize that storage devices utilized to store program instructions can be distributed across a network. For example, a remote computer may store an example of the process described as software. A local or terminal computer may access the remote computer and download a part or all of the software to run the program. Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realize that by utilizing conventional techniques known to those skilled in the art that all, or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP, programmable logic array, or the like.

Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person.

It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. It will further be understood that reference to ‘an’ item refers to one or more of those items.

The steps of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate.

It will be understood that the above description of a preferred embodiment is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples and data provide a complete description of the structure and use of exemplary embodiments of the invention. Although various embodiments of the invention have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the spirit or scope of this invention.

Claims

1. A computer-implemented method comprising:

performed by one or more processors executing computer-readable instructions,
receiving a plurality of training images of objects;
receiving an object label map for each training image, each object label map comprising a label for each image element specifying one of a plurality of object classes;
accessing a dictionary of textons, each texton comprising information describing the texture of a patch of surface of an object;
forming a texton map for each training image based at least in part on the dictionary of textons, each texton map comprising a label indicating a texton for each image element;
forming a shape filter by pairing a bounding area of each training image with a specified texton;
for each texton map computing a plurality of feature responses by applying a different shape filter for each feature response;
selecting a sub-set of the shape filters used in computing the feature responses by forming a multi-class classifier to classify image elements into the object classes based at least in part on at least one of the feature responses; and
forming an object detection and recognition system based at least in part on the selected shape filters.

2. A computer-implemented method as claimed in claim 1, wherein each shape filter comprises a bounding area defining an area of an image within which the shape filter is applied, the bounding area being movable within the image.

3. A computer-implemented method as claimed in claim 1, wherein each shape filter comprises a bounding area defining an area of an image within which the shape filter is applied and a plurality of substantially randomly sized and positioned rectangular regions within the bounding area.

4. A computer-implemented method as claimed in claim 1, wherein accessing the dictionary of textons comprises forming the dictionary of textons based at least in part on the training images.

5. A computer-implemented method as claimed in claim 1, wherein the multi-class classifier is formed based at least in part on a joint boosting process.

6. A computer-implemented method as claimed in claim 5, wherein the joint boosting process comprises iteratively building the multi-class classifier as a sum of decision stumps comprising thresholds applied to the feature responses, each decision stump being shared between a plurality of object classes.

7. A computer-implemented method as claimed in claim 1, wherein forming the object detection and recognition system comprises forming a conditional random field model of object classes, the model comprising definitions of a conditional probability of object class labels given an image based at least in part on a plurality of potentials comprising shape-texture potentials based at least in part on the shape filter, texton pairs.

8. A computer-implemented method as claimed in claim 7, further comprising learning parameters for the conditional random field model by dividing the conditional random field model into pieces and training each piece independently based at least in part on a training method incorporating fixed powers.

9. A computer-implemented method as claimed in claim 7, wherein the conditional random field model is formed based at least in part on color potentials arranged to represent a color distribution of an instance of an object class in a particular image.

10. A computer-implemented method as claimed in claim 7, wherein the conditional random field model is formed based at least in part on a location potential.

11. A computer-implemented method as claimed in claim 7, wherein the conditional random field model is formed based at least in part on an edge potential.

12. A computer-implemented method as claimed in claim 7, further comprising:

determining an overall object labeling for a previously unseen image based at least in part on the conditional random field model; and
inferring an object label map from the determined overall object labeling based at least in part on an inference process.

13. A computer-implemented method comprising:

performed by one or more processors executing computer-readable instructions,
receiving a plurality of training images of objects;
receiving an object label map for each training image, each object label map comprising a label for each image element specifying one of a plurality of object classes;
accessing a dictionary of textons, each texton comprising information describing the texture of a patch of surface of an object;
forming a texton map for each training image based at least in part on the dictionary of textons, each texton map comprising a label indicating a texton for each image element;
forming a shape filter by pairing a bounding area of each training image with a specified texton;
for each texton map computing a plurality of feature responses by applying a different shape filter for each feature response;
selecting a sub-set of the shape filters used in computing the feature responses by forming a multi-class classifier to classify image elements into the object classes based at least in part on at least one of the feature responses; and
forming an object label map for a previously unseen image based at least in part on the selected shape filters.

14. A computer-implemented method as claimed in claim 13, wherein forming the object label map for the previously unseen image comprises forming a conditional random field model comprising shape-texture potentials, edge potentials, color potentials, or location potentials.

15. A computer-implemented method as claimed in claim 14, further comprising determining parameters for the shape-texture potentials based at least in part on a joint boosting process with a substantially random selection of shape filters.

16. A computer-implemented method as claimed in claim 14, further comprising determining parameters for the color potentials based at least in part on an iterative conditional mode method.

17. One or more computer-readable storage media storing computer-executable instructions that, when executed on a processor, configure the processor to perform acts comprising:

receiving a plurality of training images of objects;
receiving an object label map for each training image, each object label map comprising a label for each image element specifying one of a plurality of object classes;
accessing a dictionary of textons, each texton comprising information describing the texture of a patch of surface of an object;
forming a texton map for each training image using the dictionary of textons, each texton map comprising a label indicating a texton for each image element;
forming a shape filter by pairing a bounding area of each training image with a specified texton;
for each texton map computing a plurality of feature responses by applying a different shape filter for each feature response;
selecting a sub-set of the shape filters used in computing the feature responses by forming a multi-class classifier to classify image elements into the object classes based at least in part on at least one of the feature responses; and
forming an object label map based at least in part on the selected shape filters.

18. The one or more computer-readable storage media of claim 17, further comprising applying the shape filters such that each shape filter comprises a bounding area defining an area of an image within which the shape filter is applied, the bounding area being movable within the image.

19. The one or more computer-readable storage media of claim 17, further comprising applying the shape filters such that each shape filter comprises a bounding area having an area of about ½ the image area.

20. The one or more computer-readable storage media of claim 17, further comprising applying the shape filters such that each shape filter comprises a bounding area defining an area of an image within which the shape filter is applied and a plurality of substantially randomly sized and positioned rectangular regions within the bounding area.

Patent History
Publication number: 20110064303
Type: Application
Filed: Nov 11, 2010
Publication Date: Mar 17, 2011
Applicant: Microsoft Corporation (Redmond, WA)
Inventors: John Winn (Cambridge), Carsten Rother (Cambridge), Antonio Criminisi (Cambridge), Jamie Shotton (Oxford)
Application Number: 12/944,130
Classifications
Current U.S. Class: Trainable Classifiers Or Pattern Recognizers (e.g., Adaline, Perceptron) (382/159)
International Classification: G06K 9/62 (20060101);