GENERATING SEARCH REQUESTS FROM MULTIMODAL QUERIES
A method and system for generating a search request from a multimodal query that includes a query image and query text is provided. The multimodal query system identifies images of a collection that are textually related to the query image based on similarity between words associated with each image and the query text. The multimodal query system then selects those images of the identified images that are visually related to the query image. The multimodal query system may formulate a search request based on keywords of web pages that contain the selected images and submit that search request to a search engine service.
This application is a continuation application of U.S. Pat. No. 8,081,824, filed on Nov. 30, 2004, and issued on Dec. 20, 2011, entitled “GENERATING SEARCH REQUESTS FROM MULTIMODAL QUERIES,” which is a divisional application of U.S. Pat. No. 7,457,825, filed on Sep. 21, 2005, and issued on Nov. 25, 2008, entitled “GENERATING SEARCH REQUESTS FROM MULTIMODAL QUERIES,” both of which are incorporated herein in their entireties by reference.
BACKGROUND
Many search engine services, such as Google and Overture, provide for searching for information that is accessible via the Internet. These search engine services allow users to search for display pages, such as web pages, that may be of interest to users. After a user submits a search request or query that includes search terms, the search engine service identifies web pages that may be related to those search terms. To quickly identify related web pages, the search engine services may maintain a mapping of keywords to web pages. This mapping may be generated by “crawling and indexing” the web (i.e., the World Wide Web) to identify the keywords of each web page. To crawl the web, a search engine service may use a list of root web pages to identify all web pages that are accessible through those root web pages. The keywords of any particular web page can be identified using various well-known information retrieval techniques, such as identifying the words of a headline, the words supplied in the metadata of the web page, the words that are highlighted, and so on. The search engine service then ranks the web pages of the search result based on the closeness of each match, web page popularity (e.g., Google's PageRank), and so on. The search engine service may also generate a relevance score to indicate how relevant the information of the web page may be to the search request. The search engine service then displays to the user links to those web pages in an order that is based on their rankings.
These search engine services may, however, not be particularly useful in certain situations. In particular, it can be difficult to formulate a suitable search request that effectively describes the needed information. For example, if a person sees a flower on the side of a road and wants to learn the identity of the flower, the person when returning home may formulate the search request of “picture of yellow tulip-like flower in Europe” (e.g., yellow tulip) in hopes of seeing a picture of the flower. Unfortunately, the search result may identify so many web pages that it may be virtually impossible for the person to locate the correct picture, assuming that the person can even accurately remember the details of the flower. If the person has a mobile device, such as a personal digital assistant (“PDA”) or cell phone, the person may be able to submit the search request while at the side of the road. Such mobile devices, however, have limited input and output capabilities, which make it difficult both to enter the search request and to view the search result.
If the person, however, is able to take a picture of the flower, the person may then be able to use a Content Based Information Retrieval (“CBIR”) system to find a similar looking picture. Although the detection of duplicate images can be achieved when the image database of the CBIR system happens to contain a duplicate image, the image database will not contain a duplicate of the picture of the flower at the side of the road. If a duplicate image is not in the database, it can be prohibitively expensive computationally, if even possible, to find a “matching” image. For example, if the image database contains an image of a field of yellow tulips and the picture contains only a single tulip, then the CBIR system may not recognize the images as matching.
SUMMARY
A method and system for generating a search request from a multimodal query is provided. The multimodal query system inputs a multimodal query that includes a query image and query text. The multimodal query system provides a collection of images along with one or more words associated with each image. The multimodal query system identifies images of the collection that are textually related to the query image based on similarity between associated words and the query text. The multimodal query system then selects those images of the identified images that are visually related to the query image. The multimodal query system may formulate a search request based on keywords of the web pages that contain the selected images and submit that search request to a search engine service, a dictionary service, an encyclopedia service, or the like. Upon receiving the search result, the multimodal query system provides that search result as the search result for the multimodal query.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
DETAILED DESCRIPTION
A method and system for generating a search request from a multimodal query is provided. In one embodiment, the multimodal query system inputs a multimodal query that includes an image (i.e., query image) and verbal information (i.e., query text). For example, a multimodal query may include a picture of a flower along with the word “flower.” The verbal information may be input as text via a keyboard, as audio via a microphone, and so on. The multimodal query system provides a collection of images along with one or more words associated with each image. For example, each image of the collection may have associated words that describe the subject of the image. In the case of an image of a yellow tulip, the associated words may include yellow, tulip, lily, flower, and so on. The multimodal query system identifies images of the collection whose associated words are related to the query text. The identifying of images based on relatedness to the query text helps to reduce the set of images that may be related to the query image. The multimodal query system then selects those images of the identified images that are visually related to the query image. For example, the multimodal query system may use a content-based information retrieval (“CBIR”) system to determine which of the identified images are most visually similar to the query image. In one embodiment, the multimodal query system may return the selected images as the search result. For example, the multimodal query system may provide links to web pages that contain the selected images. In another embodiment, the multimodal query system may formulate a search request based on keywords of the web pages that contain the selected images and submit that search request to a search engine service, a dictionary service, an encyclopedia service, or the like.
For example, the keywords of the web pages that contain the selected images may include the phrases yellow tulip, tulipa, Liliaceae lily flower, Holland yellow flower, and so on, and the formulated search request may be “yellow tulip lily flower Holland.” Upon receiving the search result, the multimodal query system provides that search result as the search result for the multimodal query. In this way, the multimodal query system allows the multimodal query to specify needed information more precisely than is specified by a unimodal query (e.g., query image alone or query text alone).
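The end-to-end flow described above can be sketched in a few lines. This is an illustrative simplification, not the patented implementation: the collection records, the word-overlap test for textual relatedness, and the plain L1 distance standing in for the CBIR comparison are all assumptions made for the example.

```python
def multimodal_search(query_image, query_text, collection, search_engine):
    """Sketch of the multimodal query flow: prune by text, rank by image,
    then formulate a text-only search request from page keywords."""
    # Step 1: identify images whose associated words overlap the query text.
    q_words = set(query_text.lower().split())
    candidates = [r for r in collection if q_words & set(r["words"])]

    # Step 2: rank candidates by visual similarity to the query image
    # (plain L1 distance stands in for the CBIR comparison here).
    def l1(record):
        return sum(abs(a - b) for a, b in zip(record["vector"], query_image))
    candidates.sort(key=l1)
    selected = candidates[:1]  # keep the closest image(s)

    # Step 3: formulate a text-only request from the selected pages' keywords.
    keywords = sorted({kw for r in selected for kw in r["keywords"]})
    return search_engine(" ".join(keywords))


collection = [
    {"words": ["tulip", "flower", "yellow"], "vector": [0.9, 0.1],
     "keywords": ["yellow tulip", "Holland"]},
    {"words": ["rose", "flower", "red"], "vector": [0.1, 0.9],
     "keywords": ["red rose"]},
]
# Both images match the word "flower"; the tulip image is visually closer.
request = multimodal_search([0.8, 0.2], "yellow flower", collection,
                            lambda query: query)
```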
In one embodiment, the multimodal query system may generate from the collection of images a word-to-image index for use in identifying the images that are related to the query text. The word-to-image index maps images to their associated words. For example, the words tulip, flower, and yellow may map to the image of a field of yellow tulips. The multimodal query system may generate the collection of images from a collection of web pages that each contain one or more images. The multimodal query system may assign a unique image identifier to each image of a web page. The multimodal query system may then identify words associated with the image. For each associated word, the multimodal query system adds an entry that maps the word to the image identifier. The multimodal query system uses these entries when identifying images that are related to the query text. The multimodal query system may use conventional techniques to identify the images that are most textually related to the query text based on analysis of the associated words.
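A word-to-image index of this kind is an ordinary inverted index. The following is a minimal sketch, assuming image identifiers and associated-word lists are already available (the patent additionally derives the words from web pages):

```python
from collections import defaultdict

def build_word_to_image_index(images):
    """Map each associated word to the set of image identifiers it
    describes. `images` is a hypothetical dict of
    {image_id: iterable of associated words}."""
    index = defaultdict(set)
    for image_id, words in images.items():
        for word in words:
            index[word.lower()].add(image_id)
    return index


index = build_word_to_image_index({
    "img-001": ["Tulip", "flower", "yellow"],
    "img-002": ["rose", "flower", "red"],
})
```

A query-text word then retrieves all candidate images in one lookup, which is what makes the textual pruning step cheap relative to comparing every image visually.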
In one embodiment, the multimodal query system may generate from the collection of images an image-to-related-information index for use in selecting the identified images that are visually related to the query image. The image-to-related-information index may map each image to a visual feature vector of the image, a bitmap of the image, a web page that contains the image, and keywords of the web page that are associated with the image. For each image, the multimodal query system generates a visual feature vector of features (e.g., average RGB value) that represents the image. When determining whether an image of the collection is visually related to a query image, the multimodal query system generates a visual feature vector for the query image and compares it to the visual feature vector of the image-to-related-information index. The multimodal query system may identify, from the web page that contains an image, keywords associated with the image and store an indication of those keywords in the image-to-related-information index. The multimodal query system uses the keywords associated with the selected images to formulate a unimodal or text-based search request for the multimodal query.
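The image-to-related-information index and the feature-vector comparison can be sketched as follows. The record fields, the L1 distance, and the threshold are illustrative assumptions (the bitmap field is omitted for brevity; real feature vectors would hold values such as average RGB, as the text notes):

```python
def build_image_info_index(records):
    """Map each image id to its related information: a visual feature
    vector, the containing web page, and the page's keywords."""
    return {
        r["image_id"]: {
            "feature_vector": r["feature_vector"],  # e.g. average RGB values
            "page_url": r["page_url"],
            "keywords": r["keywords"],
        }
        for r in records
    }

def is_visually_related(index, image_id, query_vector, threshold=0.5):
    """Compare the query image's feature vector to a stored vector using
    L1 distance; the threshold is an arbitrary example value."""
    stored = index[image_id]["feature_vector"]
    distance = sum(abs(a - b) for a, b in zip(stored, query_vector))
    return distance <= threshold


index = build_image_info_index([
    {"image_id": "img-001", "feature_vector": [0.2, 0.4],
     "page_url": "http://example.com/tulips", "keywords": ["yellow tulip"]},
])
```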
In one embodiment, the multimodal query system may initially search the collection of images to determine whether there is a duplicate image. If a duplicate image is found, then the multimodal query system may use the keywords associated with that image (e.g., from the image-to-related-information index) to formulate a search request based on the multimodal query. If no duplicate image is found, then the multimodal query system uses the query text to identify images and then selects from those identified images that are textually and visually related to the query image as described above. The multimodal query system may generate a signature-to-image index for identifying duplicate images by comparing signatures of the images of the collection to the signature of a query image. The multimodal query system may use various hashing algorithms to map an image to a signature that has a relatively high likelihood of being unique to that image within the collection (i.e., no collisions). To identify duplicate images, the multimodal query system generates a signature for the query image and determines whether the signature-to-image index contains an entry with the same signature.
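The signature-to-image index can be sketched with a cryptographic hash standing in for whatever hashing algorithm an implementation might choose (an assumption, not the patent's algorithm; note that an exact-bytes hash finds only bit-identical duplicates, not re-encoded or resized copies):

```python
import hashlib

def image_signature(image_bytes):
    """Hash raw image bytes to a signature that is very unlikely to
    collide within the collection."""
    return hashlib.sha256(image_bytes).hexdigest()

def build_signature_index(images):
    """`images` is a hypothetical dict of {image_id: raw image bytes}."""
    return {image_signature(data): image_id for image_id, data in images.items()}

def find_duplicate(index, query_bytes):
    """Return the id of an indexed duplicate image, or None."""
    return index.get(image_signature(query_bytes))


index = build_signature_index({"img-001": b"tulip-image-bytes"})
```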
The computing devices on which the multimodal query system may be implemented may include a central processing unit, memory, input devices (e.g., keyboard and pointing devices), output devices (e.g., display devices), and storage devices (e.g., disk drives). The memory and storage devices are computer-readable media that may contain instructions that implement the multimodal query system. In addition, the data structures may be stored or transmitted via a data transmission medium, such as a signal on a communications link. Various communications links may be used to connect components of the system, such as the Internet, a local area network, a wide area network, a point-to-point dial-up connection, a cell phone network, and so on.
Embodiments of the multimodal query system may be implemented in various operating environments that include personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, programmable consumer electronics, digital cameras, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and so on. The devices may include cell phones, personal digital assistants, smart phones, personal computers, programmable consumer electronics, digital cameras, and so on.
The multimodal query system may be described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, and so on that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments.
Iij = (1/Nij) Σx Σy I(x, y) (1)
where Iij is the average intensity for block ij, Nij is the number of pixels in block ij, and x and y range over the pixels of block ij. The system then performs a two-dimensional discrete cosine transform (“DCT”) on the matrix. The system discards the DC coefficient of the DCT matrix and selects 48 AC coefficients of the DCT matrix in a zigzag pattern as illustrated by pattern 405 resulting in an AC coefficients vector 406. The system then performs a principal component analysis (“PCA”) to generate a 32-dimension feature vector 407 as illustrated by the following equation:
Yn = PT Am (2)
where Yn represents the 32-dimension feature vector, Am represents the 48 AC coefficients, and P represents an m×n transform matrix whose columns are the n orthonormal eigenvectors corresponding to the n largest eigenvalues of the covariance matrix ΣA of the AC coefficient vectors.
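The DCT-and-PCA feature extraction can be sketched as follows. The orthonormal DCT basis and the JPEG-style zigzag scan are standard constructions; the PCA transform P is assumed to have been learned offline from a training collection (here it is just a random placeholder):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis, so the 2-D transform is C @ X @ C.T."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    C[0] = np.sqrt(1.0 / n)
    return C

def zigzag(n=8):
    """(row, col) pairs of an n×n matrix in JPEG-style zigzag order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[1] if (rc[0] + rc[1]) % 2 == 0 else rc[0]))

def dct_pca_feature(intensity, P):
    """Compute the 32-dimension feature Yn = P^T Am of equation (2) from
    an 8×8 average-intensity matrix."""
    C = dct_matrix(8)
    coeffs = C @ intensity @ C.T          # 2-D DCT of the intensity matrix
    # Discard the DC coefficient; keep the first 48 AC coefficients.
    ac = np.array([coeffs[r, c] for r, c in zigzag(8)[1:49]])
    return P.T @ ac                        # project through the 48×32 PCA matrix
```

As a sanity check, a constant-intensity image has only a DC component, so every AC coefficient, and hence the projected feature, is (numerically) zero.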
Dj = wRGB·‖FRGBquery − FRGBj‖1 + wHSV·‖FHSVquery − FHSVj‖1 + wDaub·‖FDaubquery − FDaubj‖1, j = 1, . . . , M (3)
where FRGBquery, FHSVquery, and FDaubquery are the feature vectors of the query image, FRGBj, FHSVj, and FDaubj are the feature vectors of image j, and ‖·‖1 is a normalized L1 distance operator. In one embodiment, the component uses the constant weights wRGB=0.3, wHSV=0.5, and wDaub=0.2. The component then loops to block 802 to select the next image. In block 808, the component selects the images with the smallest distances and returns the selected images.
tf−idfi = (nid / nd) · log(N / ni)
where tf−idfi represents the score for word i, nid represents the number of occurrences of word i on web page d, nd represents the total number of words on web page d, ni represents the number of web pages that contain word i, and N represents the number of web pages in the collection of web pages. In blocks 1301-1307, the component loops, calculating a score for each keyword of the document. In block 1301, the component selects the next keyword, which can contain a single word or multiple words. In decision block 1302, if all the keywords have already been selected, then the component returns the scores for the keywords, else the component continues at block 1303. In block 1303, the component calculates a mutual information score of the selected keyword as represented by the following equation:
where MI(P) represents the mutual information score for keyword P, Occu(P) represents the count of occurrences of P on the web page, |P| represents the number of words P contains, N(|P|) represents the total number of keywords (i.e., phrases) with length less than |P|, prefix(P) represents the prefix of P with length |P|−1, and suffix(P) represents the suffix of P with length |P|−1. In decision block 1304, if the mutual information score is greater than a threshold, then the component continues at block 1305, else the component loops to block 1301 to select the next keyword. If the mutual information score does not meet a threshold level, then the component considers the keyword to be unimportant and sets its score to 0. In block 1305, the component calculates the TF-IDF score for the selected keyword as the average of the TF-IDF score for the words of the keyword. In block 1306, the component calculates a visualization style score (“VSS”) to factor in the visual characteristics of the keyword as represented by the following equation:
where VSS(P) represents the VSS score for the keyword P and tf−idfmax represents the maximum TF-IDF score of all keywords of the web page. The VSS is based on whether the keyword is in the title or in metadata and whether the keyword is in bold or in a large font. One skilled in the art will appreciate that other visual characteristics could be taken into consideration, such as position of a keyword on a page, closeness to an image, and so on. In block 1307, the component calculates a combined score for the selected keyword according to the following equation:
score(P) = b0 + b1·tf−idf(P) + b2·MI(P) + b3·VSS(P)
where the coefficients b0, . . . , b3 are empirically determined weights over the tf−idf, MI, and VSS features. The component then loops to block 1301 to select the next keyword.
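The tf-idf computation and the final combination of features can be sketched as follows. The linear form with an intercept is inferred from the four coefficients b0..b3 over the three features, and the coefficient values in the usage example are placeholders, not the patent's empirically determined values:

```python
import math

def tf_idf(word, page_words, pages):
    """TF-IDF as defined above: term frequency on the page times the
    log inverse document frequency over the collection of pages."""
    tf = page_words.count(word) / len(page_words)          # nid / nd
    df = sum(1 for p in pages if word in p)                # ni
    return tf * math.log(len(pages) / df) if df else 0.0   # × log(N / ni)

def keyword_score(features, b):
    """Combine the tf-idf, MI, and VSS features of a keyword with
    coefficients b = [b0, b1, b2, b3]."""
    return (b[0] + b[1] * features["tf_idf"]
                 + b[2] * features["mi"]
                 + b[3] * features["vss"])


pages = [["yellow", "tulip", "flower"], ["red", "rose", "flower"]]
# "flower" appears on every page, so its idf, and hence its score, is zero.
score = keyword_score(
    {"tf_idf": tf_idf("tulip", pages[0], pages), "mi": 1.0, "vss": 0.5},
    [0.0, 1.0, 1.0, 1.0])  # placeholder coefficients
```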
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. For example, the multimodal query system may consider images to be duplicates when they are identical and when they are of the same content but from different points of view. An example of different points of view would be pictures of the same building from different angles or different distances. As used herein, the term “keyword” refers to a phrase of one or more words. For example, “yellow tulips” and “tulips” are both keywords. Accordingly, the invention is not limited except as by the appended claims.
From the foregoing, it will be appreciated that specific embodiments of the invention have been described herein for purposes of illustration, but that various modifications may be made without deviating from the spirit and scope of the invention. Accordingly, the invention is not limited except as by the appended claims.
Claims
1. A method in a device for generating a search request for a multimodal query with a query image and query text, the query image being stored in electronic form, the method comprising:
- providing access to a collection of images and associated words;
- receiving a multimodal query that includes a query image and query text;
- identifying images of the collection based on textual relatedness between a word associated with an image and the query text;
- selecting images of the identified images based on visual relatedness between an identified image and the query image;
- generating a search request based on keywords associated with the selected images;
- submitting the generated search request to a search engine for identifying documents related to the multimodal query; and
- providing an indication of the identified documents as a search result for the multimodal query.
2. The method of claim 1 wherein the selecting comprises extracting a feature vector for the identified image, determining the distance between the extracted feature vector and the feature vector of each image of the collection, and selecting the images based on the determined distance.
3. The method of claim 1 wherein the collection includes a collection of web pages with images and words.
4. The method of claim 1 wherein visual relatedness is based on similarity in color space and wavelet coefficients.
5. The method of claim 1 including:
- before identifying images of the collection, determining whether the query image is a duplicate of an image of the collection; and when the query image is a duplicate of an image, generating a search request based on a keyword associated with that image.
6. The method of claim 5 wherein the query image is a duplicate when the images are identical.
7. The method of claim 5 wherein the query image is a duplicate when the images are of the same content but from different points of view.
8. The method of claim 5 wherein the collection includes signatures of the images and wherein images are duplicates when they have the same signature.
9. The method of claim 1 wherein the query text is derived from audio information.
10. A computer-readable storage device containing computer-executable instructions for controlling a computing device to find images related to a multimodal query, the instructions for performing a method comprising:
- providing access to web pages with images, the web pages having words;
- receiving a query image and query text of the multimodal query;
- identifying images of the web pages based on textual relatedness between words of a web page and the query text;
- selecting images of the identified images of the web pages based on visual relatedness between an identified image and the query image; and
- generating a search request based on keywords associated with the selected images.
11. The computer-readable storage device of claim 10 including:
- submitting the generated search request to a search engine for identifying documents related to the multimodal query; and
- providing an indication of the identified documents as a search result for the multimodal query.
12. The computer-readable storage device of claim 10 wherein the selecting comprises extracting a feature vector for the query image, determining the distance between the extracted feature vector and the feature vector of each image of the collection, and selecting the images based on the determined distance.
13. The computer-readable storage device of claim 10 including:
- before identifying web pages, determining whether the query image is a duplicate of an image of a web page; and when the query image is a duplicate of an image of a web page, generating a search request based on words of the web page that contains the duplicate image.
14. The computer-readable storage device of claim 10 wherein visual relatedness is based on similarity in color space and wavelet coefficients.
15. The computer-readable storage device of claim 13 wherein the query text is derived from audio information.
16. A computing device for generating a search request for a multimodal query with a query image and query text, comprising:
- a memory storing computer-executable instructions of: a component that identifies images of a collection of images based on textual relatedness between a word associated with an image and the query text; a component that selects images of the identified images based on visual relatedness between an identified image and the query image; and a component that generates a search request based on keywords associated with the selected images;
- a processor that executes the computer-executable instructions stored in the memory.
17. The computing device of claim 16 including:
- a component that submits the generated search request to a search engine for identifying documents related to the multimodal query; and
- a component that provides an indication of the identified documents as a search result for the multimodal query.
18. The computing device of claim 17 wherein the component that selects extracts a feature vector for the identified image, determines the distance between the extracted feature vector and the feature vector of each image of the collection, and selects the images based on the determined distance.
19. The computing device of claim 17 including a component that before identifying images of the collection, determines whether the query image is a duplicate of an image of the collection and when the query image is a duplicate of an image, generates a search request based on a keyword associated with that image.
20. The computing device of claim 17 wherein the collection includes signatures of the images and wherein images are duplicates when they have the same signature.
Type: Application
Filed: Dec 20, 2011
Publication Date: Apr 19, 2012
Applicant: Microsoft Corporation (Redmond, WA)
Inventors: Ming Jing Li (Beijing), Wei-Ying Ma (Beijing), Xing Xie (Beijing), Xin Fan (Hefei), Zhiwei Li (Beijing)
Application Number: 13/332,248
International Classification: G06K 9/00 (20060101); G06K 9/54 (20060101);