MULTI-MODAL QUERY REFINEMENT

- Microsoft

A multi-modal search query refinement system (and corresponding methodology) is provided. In accordance with the innovation, query suggestion results represent a word palette which can be used to select strings for inclusion in, or exclusion from, a refined set of results. The system employs text, speech, touch and gesture input to refine a set of search query results. Wildcards can be employed in the search, either specified explicitly by the user or inferred by the system. Additionally, partial knowledge supplemented by speech can be employed to refine search results.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Patent application Ser. No. 61/053,214 entitled “MULTI-MODALITY SEARCH INTERFACE” and filed May 14, 2008. This application is related to pending U.S. patent application Ser. No. ______ entitled “MULTI-MODAL QUERY GENERATION” filed on ______ and to pending U.S. patent application Ser. No. ______ entitled “MULTI-MODAL SEARCH WILDCARDS” filed on ______. The entireties of the above-noted applications are incorporated by reference herein.

BACKGROUND

The Internet continues to make available ever-increasing amounts of information which can be stored in databases and accessed therefrom. With the proliferation of mobile and portable terminals (e.g., cellular telephones, personal data assistants (PDAs), smartphones and other devices), users are becoming more mobile, and hence, more reliant upon information accessible via the Internet. Accordingly, users often search network sources such as the Internet from their mobile device.

There are essentially two phases in an Internet search. First, a search query is constructed that can be submitted to a search engine. Second, the search engine matches this search query to actual search results. Conventionally, these search queries were constructed merely of keywords that were matched to a list of results based upon factors such as relevance, popularity, preference, etc.

The Internet and the World Wide Web continue to evolve rapidly with respect to both volume of information and number of users. As a whole, the Web provides a global space for accumulation, exchange and dissemination of information. As mobile devices become more and more commonplace to access the Web, the number of users continues to increase.

In some instances, a user knows the name of a site, server or URL (uniform resource locator) to the site or server that is desired for access. In such situations, the user can access the site by simply typing the URL in an address bar of a browser to connect to the site. Oftentimes, the user does not know the URL and therefore has to ‘search’ the Web for relevant sources and/or URLs. To maximize likelihood of locating relevant information amongst an abundance of data, Internet or web search engines are regularly employed.

Traditionally, to locate a site or corresponding URL of interest, the user can employ a search engine to facilitate locating and accessing sites based upon alphanumeric keywords and/or Boolean operators. In aspects, these keywords are text- or speech-based queries, although speech is not always reliable. Essentially, a search engine is a tool that facilitates web navigation based upon textual (or speech-to-text) entry of a search query usually comprising one or more keywords. Upon receipt of a search query, the search engine retrieves a list of websites, typically ranked based upon relevance to the query. To enable this functionality, the search engine must generate and maintain a supporting infrastructure.

Upon textual entry of one or more keywords as a search query, the search engine retrieves indexed information that matches the query from an indexed database, generates a snippet of text associated with each of the matching sites and displays the results to the user. The user can thereafter scroll through a plurality of returned sites to attempt to determine if the sites are related to the interests of the user. However, this can be an extremely time-consuming and frustrating process as search engines can return a substantial number of sites. More often than not, the user is forced to narrow the search iteratively by altering and/or adding keywords and Boolean operators to obtain the identity of websites including relevant information, again by typing (or speaking) the revised query.

Conventional computer-based search, in general, is extremely text-centric (pure text or speech-to-text) in that search engines typically analyze content of alphanumeric search queries in order to return results. These traditional search engines merely parse alphanumeric queries into ‘keywords’ and subsequently perform searches based upon a defined number of instances of each of the keywords in a reference.

Currently, users of mobile devices, such as smartphones, often attempt to access or ‘surf’ the Internet using keyboards or keypads such as, a standard numeric phone keypad, a soft or miniature QWERTY keyboard, etc. Unfortunately, these input mechanisms are not always efficient for the textual input needed to search the Internet effectively. As described above, conventional mobile devices are limited to text input to establish search queries, for example, Internet search queries. Text input can be a very inefficient way to search, particularly over long periods of time and/or for very long queries.

SUMMARY

The following presents a simplified summary of the innovation in order to provide a basic understanding of some aspects of the innovation. This summary is not an extensive overview of the innovation. It is not intended to identify key/critical elements of the innovation or to delineate the scope of the innovation. Its sole purpose is to present some concepts of the innovation in a simplified form as a prelude to the more detailed description that is presented later.

The innovation disclosed and claimed herein, in one aspect thereof, comprises search systems (and corresponding methodologies) that can couple speech, text and touch for search interfaces and engines. In particular aspects, the multi-modal functionality can be used to refine search results thereby enhancing search functionality with minimal textual input. In other words, rather than being completely dependent upon conventional textual input, the innovation can combine speech, text, and touch to enhance usability and efficiency of search mechanisms. Accordingly, it can be possible to locate more meaningful and comprehensive results as a function of a search query.

In aspects, the innovation discloses a multi-modal search interface that tightly couples speech, text and touch by utilizing regular expression queries with ‘wildcards,’ where parts of the query can be input via different modalities, e.g., speech, text, and touch can each be used at any point in the query construction process. In other aspects, the innovation can represent uncertainty in a spoken recognized result as wildcards in a regular expression query. In yet other aspects, the innovation allows users to express their own uncertainty about parts of their utterance using expressions such as “something” or “whatchamacallit,” which are then translated into wildcards.

In still other aspects, the innovation can be incorporated or retrofitted into existing search engines and/or interfaces. Additionally, the features, functionality and benefits of the innovation can be incorporated into mobile search applications which have strategic importance given the increasing usage of mobile devices as a primary computing device. As described above, mobile devices are not always configured or equipped with full-function keyboards, thus, the multi-modal functionality of the innovation can be employed to greatly enhance comprehensiveness of search.

In yet another aspect thereof, machine learning and reasoning is provided that employs a probabilistic and/or statistical-based analysis to prognose or infer an action that a user desires to be automatically performed.

To the accomplishment of the foregoing and related ends, certain illustrative aspects of the innovation are described herein in connection with the following description and the annexed drawings. These aspects are indicative, however, of but a few of the various ways in which the principles of the innovation can be employed and the subject innovation is intended to include all such aspects and their equivalents. Other advantages and novel features of the innovation will become apparent from the following detailed description of the innovation when considered in conjunction with the drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates an example block diagram of a multi-modal search refinement system in accordance with an aspect of the innovation.

FIG. 2 illustrates an example flow diagram of procedures that facilitate query refinement in accordance with an aspect of the innovation.

FIG. 3 illustrates an example query administration component in accordance with an aspect of the innovation.

FIG. 4 illustrates an example query refinement component in accordance with an aspect of the innovation.

FIG. 5 illustrates an example screenshot showing that the innovation can tightly couple touch and text for multi-modal refinement.

FIGS. 6a-e illustrate an example word palette that helps a user compose and refine a search phrase from an n-best list.

FIGS. 7a-c illustrate example text hints that help the speech recognizer efficiently identify the query.

FIGS. 8a-e illustrate example screenshots showing that words can be excluded and restored from retrieved results by touch.

FIGS. 9a-e illustrate that a user can specify uncertain information using the word “something” in accordance with aspects.

FIG. 10 illustrates example recovery rates for using multi-modal refinement with a word palette in accordance with an aspect.

FIG. 11 illustrates example recovery rates for text hints of increasing number of characters in accordance with aspects.

FIG. 12 illustrates a block diagram of a computer operable to execute the disclosed architecture.

FIG. 13 illustrates a schematic block diagram of an exemplary computing environment in accordance with the subject innovation.

DETAILED DESCRIPTION

The innovation is now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the subject innovation. It may be evident, however, that the innovation can be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing the innovation.

As used in this application, the terms “component” and “system” are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component can be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers.

As used herein, the terms “infer” and “inference” refer generally to the process of reasoning about or inferring states of the system, environment, and/or user from a set of observations as captured via events and/or data. Inference can be employed to identify a specific context or action, or can generate a probability distribution over states, for example. The inference can be probabilistic—that is, the computation of a probability distribution over states of interest based on a consideration of data and events. Inference can also refer to techniques employed for composing higher-level events from a set of events and/or data. Such inference results in the construction of new events or actions from a set of observed events and/or stored event data, whether or not the events are correlated in close temporal proximity, and whether the events and data come from one or several event and data sources.

While certain ways of displaying information to users are shown and described with respect to certain figures as screenshots, those skilled in the relevant art will recognize that various other alternatives can be employed. The terms “screen,” “web page,” and “page” are generally used interchangeably herein. The pages or screens are stored and/or transmitted as display descriptions, as graphical user interfaces, or by other methods of depicting information on a screen (whether personal computer, PDA, mobile telephone, or other suitable device, for example) where the layout and information or content to be displayed on the page is stored in memory, database, or another storage facility.

The innovation disclosed and claimed herein, in aspects thereof, describes a method (and system) of presenting search query suggestions for a regular expression query with wildcards whereby not only are the best candidate phrase matches displayed, but each word in the displayed phrases is treated as a substitution choice for the words and/or wildcards in the query. In other words, the list of query suggestion results essentially acts as a kind of “word palette” with which users can select (e.g., via touch or d-pad) words to compose and/or refine queries. In aspects, users can drag and drop words into a query from the query suggestion list.

Still other aspects can employ a “refinement by exclusion” technique where, if the user does not select any of the phrases in the query suggestion list, as can be implemented in an example “None of the above” choice, every word that was not selected can be treated as a word to exclude in retrieving more matches from the index. Another aspect employs the refinement technique described above based on retrieving entries from a k-best suffix array with constraints. These and other aspects will be described in greater detail in connection with the figures that follow.

Referring initially to the drawings, FIG. 1 illustrates an example block diagram of a system 100 that employs a multi-modal search refinement component 102 to refine search queries by selecting and/or excluding terms thereby rendering meaningful search results. It is to be understood that, as used herein, ‘multi-modal’ can refer to most any combination of text, voice, touch, gesture, etc. While examples described herein are directed to a specific multi-modal example that employs text, voice and touch only, it is to be understood that other examples exist that employ a subset of these identified modalities. As well, it is to be understood that other examples exist that employ disparate modalities in combination with or separate from those described herein. For instance, other examples can employ gesture input, image/pattern recognition, among others to refine a search query.

As shown, the multi-modal search refinement component 102 can include a query administration component 104 and a search engine component 106; together, these sub-components (104, 106) can be referred to as a backend search system. Essentially, these sub-components (104, 106) enable a user to refine a set of search results by way of multiple modalities, for example, text, voice, touch, etc. As described herein, a set of search results can be employed as a word palette thereby enabling users to refine, improve, filter or focus the results as desired. Features, functions and benefits of the innovation will be described in greater detail below. As will be described in greater detail infra, the query administration component 104 can employ multiple input modes to construct a search query, e.g., a wildcard query. The search engine component 106 can include backend components capable of matching query suggestions.

Internet usage, especially via mobile devices, continues to grow as users seek anytime, anywhere access to information. Because users frequently search for businesses, directory assistance has been the focus of conventional voice search applications utilizing speech as the primary input modality. Unfortunately, mobile usage scenarios often contain noise which degrades performance of speech recognition functionalities. Thus, the innovation presents a multi-modal search refinement component 102, a mobile search interface that not only can facilitate touch and text refinement whenever speech fails, but also allows users to assist the recognizer via text hints. For instance, a text hint can be used together with speech to better refine search queries.

The innovation can also take advantage of most any partial knowledge users may have about a business listing by letting them express their uncertainty in a simple, intuitive way. In simulation experiments conducted on actual voice search data, leveraging multi-modal refinement resulted in a 28% relative reduction in error rate. Providing text hints along with the spoken utterance resulted in an even greater relative reduction, with dramatic gains in recovery for each additional character.

As can be appreciated, according to market research, mobile devices are poised to rival desktop and laptop PCs as the dominant Internet platform, providing users with anytime, anywhere access to information. One common request for information is the telephone number or address of local businesses. Because perusing a large index of business listings can be a cumbersome affair using existing mobile text- and touch-based input mechanisms, directory assistance has been the focus of voice search applications, which utilize speech as the primary input modality. Unfortunately, mobile environments pose problems for speech recognition, even for native speakers. First, mobile settings often contain non-stationary noise which cannot be easily cancelled or filtered. Second, speakers tend to adapt to surrounding noise in acoustically unhelpful ways. Under such adverse conditions, task completion for voice search is less than stellar, especially in the absence of an effective correction user interface (UI) for dealing with speech recognition errors.

In light of the challenges of mobile voice search, the multi-modal search refinement system 102 can generate a series of user interfaces (UI) which assist in refinement of search queries. As will be described with reference to the figures that follow, the multi-modal UIs tightly couple speech with touch and text (as well as gestures, etc.) in at least two directions; users can not only use touch and text to refine their queries, whenever speech fails, but they can also use speech whenever text entry becomes burdensome. Essentially, the innovation can facilitate this tight coupling by transforming a typical n-best list, or a list of phrase alternates from the recognizer, into a palette of words with which users can compose and refine queries.

The innovation can also take advantage of any partial knowledge users may have about the words of the business listing. For example, a user may only remember that the listing starts with an “s” and also contains the word “avenue”. Likewise, the user may only remember “Saks something”, where the word “something” is used to express uncertainty about what words follow. While the word “something” is used in the aforementioned example, it is to be appreciated that most any desired word or indicator can be used without departing from the spirit/scope of the innovation and claims appended hereto. The innovation can represent this uncertainty as wildcards in an enhanced regular expression search of the listings, which exploits the popularity of the listings.

FIG. 2 illustrates a methodology of refining a search query in accordance with an aspect of the innovation. While, for purposes of simplicity of explanation, the one or more methodologies shown herein, e.g., in the form of a flow chart, are shown and described as a series of acts, it is to be understood and appreciated that the subject innovation is not limited by the order of acts, as some acts may, in accordance with the innovation, occur in a different order and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all illustrated acts may be required to implement a methodology in accordance with the innovation.

At 202, search query suggestion results are received. Further, the query suggestion results can be categorized, ordered or otherwise organized in most any manner using most any ranking algorithm or methodology. In accordance with the innovation, the search results are representative of a ‘bag of words’ or a ‘word palette.’ In other words, the words within the search results themselves are search terms that can be used to further organize, sort, filter or otherwise refine the results.

In one example, a search term can be selected at 204—it is to be understood that this act can include selection of multiple terms from the list of results. In one aspect, the selected search term(s) can be used for inclusion within a refined query. In yet another example, at 206, a word or set of words can be selected for exclusion such that the refinement will exclude any results that employ the excluded word or set of words.

The selection for inclusion or exclusion can employ most any suitable mechanism. For instance, the selection can be effected by way of a navigation device, touch screen, speech identification or the like. Additionally, it is to be understood that the selection (e.g., refinement criteria) can be placed upon or maintained in a separate location such as a “scratchpad.” In one example, the “scratchpad” can be the textbox in which the user may have entered text (e.g., if s/he utilized text), or it could be some other area suitable for the input modality. These and other conceivable examples are to be included within the scope of this disclosure and claims appended hereto.

At 208, the search term (or selected word(s)) can be supplemented, for example, by way of speech. Here, the selected word(s) can be combined with spoken words or phrases which define or further refine a search query. A decision is made at 210 to determine if additional refinement is desired. If so, the methodology returns to 204 as shown. If not, a refined set of search query suggestion results are received at 212 in view of the refined query. This methodology is repeatable until the user is satisfied with the refinement of the search query and subsequent results. The recursive characteristics of the methodology are illustrated by decision block 214.

Referring now to FIG. 3, an example block diagram of a query administration component 104 is shown. Generally, the query administration component 104 can include an analysis component 302 and a query refinement component 304. Together, these sub-components (302, 304) enable a user to use multi-modal mechanisms to refine search query suggestion results. In other words, as described above, the initial search results can be utilized as a word palette whereby users can refine the results (e.g., by selecting words for inclusion and/or exclusion).

The analysis component 302 is capable of providing instructions to the query refinement component 304 with regard to streamlining query suggestion results. In other words, the results can be streamlined by way of analysis of input characteristics as well as including and/or excluding terms. In other aspects, the words (or portions thereof) can be supplemented with verbally spoken cues. These spoken cues can essentially employ the text as word hints thereby refining the search query results as desired.

As shown in the example of FIG. 4, the query refinement component 304 can include a selection component 402, an exclusion component 404 and a query update component 406. Together, these sub-components enable search query refinement. As described above, search query suggestion results are parsed into a word palette where each of the words or segments of words can be used to comprehensively refine the results. In other words, rather than resubmitting a revised query to a search engine, the query refinement component 304 enables a user to employ the actual words included in an original set of results to drill down or further refine a set of results.

The selection component 402 enables a user to choose one or more words from a set (e.g., word palette) that represents the words from within a set of search results. In aspects, words can be selected using navigation devices such as a mouse, trackball, touchpad or the like. Additionally, navigational keys, verbal identification, gestures, etc. can be employed to select words. Once the words are selected, these words can be used to identify words for inclusion within a refined set of results.

Alternatively, the exclusion component 404 can be used to designate a specific word, or group of words, to be excluded from a refined set of results. In other words, once a word is designated as excluded, a refined set of results can be located, thereby comprehensively adjusting the user's initial query. It will be understood that this functionality can be incorporated with a specialized search engine/mechanism. Alternatively, the features, functions and benefits of the innovation can be incorporated into, or used in connection with, conventional search engines/mechanisms.

The query update component 406 employs information and instructions received from the selection component 402 and/or the exclusion component 404 to refine the query resultant set. In addition to information received from the selection and/or exclusion components (402, 404), the query update component 406 can also receive multi-modal instructions directly from an entity or user. In aspects, selected (e.g., for inclusion) and/or excluded terms can be complemented by other text entries, speech entries, etc. thereby assisting a user in efficiently refining search results to obtain a set of meaningful results.

This disclosure focuses on three phases. First, it describes the system 100 architecture and contrasts it with the typical architecture of conventional voice search applications. The specification also details the backend infrastructure deployed onto a device for fast and efficient retrieval of the search query suggestion results. Second, the disclosure presents an example UI, highlighting its tightly coupled multi-modal refinement capabilities and support of partial knowledge with several user scenarios. Third, the system is evaluated by conducting simulation experiments examining the effectiveness of multi-modal refinement in recovering from speech errors on utterances collected from a previously deployed mobile voice search product.

Referring first to the example UI illustrated in FIG. 5, the innovation is capable of tightly coupling multiple modalities, for example, text, speech and touch. As shown, the UI enables a search for two words beginning with the letters ‘b’ and ‘n.’ In other words, the innovation infers wildcard suffixes for each of the two letters and returns results that contain matching content. As will be described below, this functionality can also be used to refine initial search results. While specific examples are shown and described, it is to be understood that these examples are provided to add perspective to the innovation and not intended to limit the innovation in any manner. Rather, it is to be understood that additional (and countless) examples exist which are to be included within the scope of the innovation and claims appended hereto.

As shown in FIG. 5, a user can be presented with an n-best list of results in response to a search query for ‘b’ and ‘n.’ These results can be effectively understood as based upon an interpretation of ‘b*’ and ‘n*,’ wherein the asterisks represent wildcards, and a wildcard matches zero, one, or more arbitrary characters.
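By way of illustration only, the following minimal Python sketch shows how such a wildcard query might be matched against a small set of listings; the listing strings and the translation into a standard regular expression are assumptions made for the example, standing in for the RegEx engine described below:

```python
import re

def wildcard_to_regex(query):
    """Translate a wildcard query such as 'b* n*' into an anchored
    regular expression; each '*' matches zero or more characters."""
    parts = [re.escape(tok).replace(r'\*', '.*') for tok in query.split()]
    return re.compile('^' + r'\s'.join(parts) + '$', re.IGNORECASE)

listings = ["bank of america", "barnes & noble", "boeing news", "home depot"]

pattern = wildcard_to_regex("b* n*")
print([name for name in listings if pattern.match(name)])
# -> ['barnes & noble', 'boeing news']
```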

In contrast to typical search systems, the innovation leverages the n-best list as a word palette such that results can be easily refined by way of selecting, excluding, parsing, or supplementing words (or portions thereof). Users can select from the list those words that the recognizer heard correctly, though they may appear in a different phrase. For example, suppose a user says “home depot,” but because of background noise, the phrase does not occur in the n-best list. Suppose, however, that the phrase “home office design” does. With typical (or conventional) voice search applications, the user would have to start over.

In accordance with the innovation, the user can simply select the word “home” and invoke the backend, which finds the most popular listings that contain the word. In one aspect, the system can measure popularity by the frequency with which a business listing appears in automated directory assistance (ADA) call logs. In order to retrieve the most popular listings that contain a particular word or substring, regular expressions can be used. Because much of the effectiveness of the innovation's interface rests on its ability to retrieve listings using a wildcard query—or a regular expression query containing wildcards—a discussion follows that describes implementation of a RegEx engine followed by further details about wildcard queries constructed in the RegEx generator.

Turning now to a discussion of a RegEx engine, one example index data structure used for regular expression matching is based on k-best suffix arrays. Similar to traditional suffix arrays, k-best suffix arrays arrange all suffixes of the listings into an array. While traditional suffix arrays arrange the suffixes in lexicographical order only, k-best suffix arrays arrange the suffixes according to two alternating orders—a lexicographical ordering and an ordering based on a figure of merit such as popularity, preference (determined or inferred), etc.

Because the k-best suffix array is sorted by both lexicographic order and popularity, it is a convenient structure for finding the most popular matches for a substring, especially when there are many matches. In an aspect, the k most popular matches can be found in time close to O(log N) for most practical situations, and with a worst case guarantee of O(sqrt N), where N is the number of characters in the listings. In contrast, a standard suffix array permits finding all matches to a substring in O(log N) time, but does not impose any popularity ordering on the matches. To find the most popular matches, the user would have to traverse them all.

Consider a simple example which explains why this subtle difference is important to the application. The standard suffix array may be sufficiently fast when searching for the k-best matches to a large substring since there will not be many matches to traverse in this case. The situation is, however, completely different for a short substring such as, for example, ‘a’. In this case, a user would have to traverse all dictionary entries containing an ‘a’, which is not much better than traversing all suffixes in the listings—in O(N) time. With a clever implementation, it is possible to continue a search in a k-best suffix array from the position at which it was previously stopped. A simple variation of k-best suffix matching will therefore allow lookup of the k-best (e.g., most popular) matches for an arbitrary wildcard query, such as, for instance ‘f* m* ban*’. The approach proceeds as the k-best suffix matching above for the largest substring without a wildcard (‘ban’). At each match, the innovation then evaluates the full wildcard query against the full listing entry for the suffix, and continues the search until k valid expansions to the wildcard query are found.
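By way of illustration only, the following Python sketch conveys the retrieval task with invented listings and popularity counts (e.g., frequencies from ADA call logs). For simplicity it uses a plain suffix array and then ranks all matches by popularity, which is exactly the traversal cost the k-best suffix array is designed to avoid; it also accepts the word-exclusion constraint discussed below:

```python
import bisect

# Hypothetical (listing, popularity) pairs for illustration.
LISTINGS = [("first mutual bank", 120), ("home depot", 950),
            ("farmers market", 300), ("free mutual banners", 5)]

# Concatenate listings; record the owning listing of every character.
text, owner = "", []
for idx, (name, _) in enumerate(LISTINGS):
    owner.extend([idx] * (len(name) + 1))
    text += name + "\n"                      # '\n' terminates each listing

# Plain suffix array: suffix start positions in lexicographic order.
suffixes = sorted(range(len(text)), key=lambda p: text[p:])
keys = [text[p:] for p in suffixes]          # precomputed for binary search

def k_best(substring, k, excluded=()):
    """Return up to k most popular listings containing `substring`,
    skipping any listing that contains an excluded word."""
    lo = bisect.bisect_left(keys, substring)
    hi = bisect.bisect_right(keys, substring + "\uffff")
    hits = {owner[suffixes[i]] for i in range(lo, hi)}
    ranked = sorted(hits, key=lambda i: -LISTINGS[i][1])
    return [LISTINGS[i][0] for i in ranked
            if not any(w in LISTINGS[i][0].split() for w in excluded)][:k]

print(k_best("mutual", 2))                    # popularity-ordered matches
print(k_best("mutual", 2, excluded={"free"})) # exclusion variant
```

A production k-best suffix array would avoid materializing every match by interleaving the lexicographic and popularity orders, which is what yields the near-O(log N) behavior noted above.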

The k-best suffix array can also be used to exclude words in the same way by continuing the search until expansions without the excluded words are found. The query refinement is an iterative process, which gradually eliminates the wildcards in the text string. Whenever the largest substring in the wildcard query does not change between iterations, there is an opportunity to further improve the computational efficiency of the expansion algorithm. In this case, the k-best suffix matching can just be continued from the point where the previous iteration ended.

With an efficient k-best suffix array matching algorithm in hand for the RegEx engine, it can be deployed, for example, onto a mobile device to avoid the latencies associated with sending information back and forth along a wireless data channel. It will be appreciated that speech recognition for ADA already takes several seconds to return an n-best list; it is desirable to provide short latencies for wildcard queries. While many of the examples described herein are directed to ADA, it is to be understood that other aspects exist, for example general Internet search, without departing from the spirit and/or scope of the innovation and claims appended hereto.

Turning now to a discussion of an IR (information retrieval) or search engine: besides wildcard queries, which provide exact matches to the listings, it is useful to also retrieve approximate matches to the listings. For at least this purpose, the innovation can implement an IR engine based on an improved term frequency-inverse document frequency (TFIDF) algorithm. As described above, what is important to note about the IR engine is that it can treat queries and listings as bags of words. This is advantageous when users either incorrectly remember the order of words in a listing, or add words that do not actually appear in the listing. This is not the case for the RegEx engine, where order and the presence of suffixes in the query matter.
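The particular TFIDF improvements are not detailed here, but by way of illustration only, the following minimal bag-of-words sketch (with invented listings) conveys why word order does not matter to the IR engine:

```python
import math
from collections import Counter

LISTINGS = ["first mutual bank", "home depot",
            "mutual of omaha", "home office design"]

docs = [Counter(name.split()) for name in LISTINGS]
N = len(docs)
df = Counter(word for doc in docs for word in doc)   # document frequency
idf = {word: math.log(N / df[word]) for word in df}

def rank(query):
    """Score listings by TFIDF overlap with a bag-of-words query;
    word order and suffixes are deliberately ignored."""
    q = Counter(query.split())
    scored = [(sum(q[w] * doc[w] * idf.get(w, 0.0) for w in q), name)
              for name, doc in zip(LISTINGS, docs)]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

print(rank("bank mutual"))  # 'first mutual bank' ranks first despite the order
```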

Referring now to the RegEx generator, and returning to the example in which a user selects the word “home” for “home depot” from the word palette: once the user invokes the backend, the word is sent as a query to a RegEx generator, which transforms it into a wildcard query. For single phrases, the generator can simply insert wildcards before spaces, as well as at the end of the entire query. For example, for the query “home”, the generator could produce the regular expression “home*”.

For a list of phrases, such as an n-best list from a speech recognizer, the query refinement component 304 can generate a wildcard query using minimal edit distance (with equal edit operation costs) to align the phrases at the word level. Once words are aligned, minimal edit distance is again applied to align the characters. Whenever there is disagreement between any aligned words or characters, a wildcard can be substituted in its place. For example, for an n-best list containing the phrases “home depot” and “home office design,” the RegEx generator would produce “home* de*”. After an initial query is formulated, the RegEx generator applies heuristics to clean up the regular expression (e.g., no word would have more than one wildcard) before it is used to retrieve k-best matches from the RegEx engine. The RegEx generator (or query refinement component 304) is invoked in this form whenever speech is utilized, such as for leveraging partial knowledge, as will be discussed below.
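By way of illustration only, the following Python sketch aligns two n-best phrases and keeps the agreed-upon character prefixes. A character-aware substitution cost approximates the two-level (word-then-character) alignment described above in a single pass; the cost values are assumptions made for the example:

```python
import os
from functools import lru_cache

def sub_cost(x, y):
    """0 for identical words, discounted when two words share a prefix,
    else 1; approximates the word-then-character alignment."""
    if x == y:
        return 0.0
    return 0.5 if os.path.commonprefix([x, y]) else 1.0

def align(a, b):
    """Minimal-edit-distance alignment of two word lists; words present
    in only one phrase (insertions/deletions) are dropped."""
    @lru_cache(maxsize=None)
    def d(i, j):
        if i == len(a):
            return float(len(b) - j)
        if j == len(b):
            return float(len(a) - i)
        return min(d(i + 1, j + 1) + sub_cost(a[i], b[j]),
                   d(i + 1, j) + 1.0, d(i, j + 1) + 1.0)

    pairs, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if d(i, j) == d(i + 1, j + 1) + sub_cost(a[i], b[j]):
            pairs.append((a[i], b[j])); i += 1; j += 1
        elif d(i, j) == d(i, j + 1) + 1.0:
            j += 1
        else:
            i += 1
    return pairs

def wildcard_query(p1, p2):
    """Keep each aligned pair's common character prefix plus '*'; a pair
    with no common prefix contributes nothing (one wildcard per word)."""
    out = []
    for x, y in align(p1.split(), p2.split()):
        prefix = os.path.commonprefix([x, y])
        if prefix:
            out.append(prefix + "*")
    return " ".join(out)

print(wildcard_query("home depot", "home office design"))  # -> home* de*
```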

As discussed above, the innovation displays an n-best list to the user, so the UI appears, at least at first blush, similar to most any other voice search application. However, in accordance with the innovation, users may select words or phrases (or portions of words or phrases) from the list of choices, provided the desired content exists among those choices. Because re-speaking does not generally increase the likelihood that the utterance will be recognized correctly, and furthermore, because mobile usage poses distinct challenges not encountered in desktop settings, the interface also endows users with a larger arsenal of recovery strategies.

The following user scenarios are highlighted to demonstrate at least two concepts: first, tight coupling of speech with touch and text, so that whenever one of the three modalities fails or becomes burdensome, users may switch to another modality in a complementary way. Second, the scenarios illustrate the ability to leverage any partial knowledge a user may have about constituent words of their intended query.

Turning to a discussion of refinement using the word palette, FIGS. 6a-e illustrate how users can leverage the word palette discussed in the previous section for multi-modal refinement. Suppose a user utters “first mutual bank” (FIG. 6a). The system returns an n-best list that unfortunately does not include the intended utterance. It will be appreciated that a number of factors can contribute to the incorrect interpretation, for example, background noise, inadequacy of the voice recognition application/functionality, lack of clarity in spoken words/phrases, etc.

As shown, it is possible that the n-best list does include parts of the utterance in the choice “2. source mutual bank.” As such, the user can now select the word “mutual” (FIG. 6b) and then “bank” (FIG. 6c), which get added to the query textbox in the order selected. The textbox functions as a scratch pad upon which users can add and edit words until they click the search button (or other trigger) on the top left-hand side (FIG. 6d) to refine the query. At this point, the query in the textbox is submitted to the backend, which retrieves a new set of results containing both exact matches of the query from the RegEx engine as well as approximate matches from the IR engine. This new result list of query suggestions can appear or otherwise be presented in the same manner as the initial n-best list, except that words matching the query are highlighted in red (or in another identifying or highlighting manner). Given that the intended query is now among the list of choices, the user simply selects the choice and is finished (FIG. 6e).

As stated supra, and illustrated in FIGS. 7a-c, the innovation supports refinement with text hints. Just as users can resort to touch and text when speech fails, they can also resort to speech whenever typing becomes burdensome, or when they feel they have provided enough text hints for the recognizer to identify their query. FIGS. 7a-c illustrate how text hints can be leveraged.

Here, as shown in FIG. 7a, the user starts typing “m” for the intended query “mill creek family practice,” but because the query is too long to type, the user utters the intended query after pressing the ‘Refine’ soft key button at the bottom of the screen (FIG. 7b). After the query returns from the backend, all choices in the list now start with an “m” and indeed include the user utterance (FIG. 7c).

The innovation can achieve this functionality by first converting the text hint in the textbox into a wildcard query and then using that query to filter the n-best list from the speech recognizer as well as to retrieve additional matches from the RegEx engine. In principle, the query could also be used to bias the recognition of the utterance in the speech engine itself.
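By way of illustration only, a minimal Python sketch of this filtering step follows; the n-best entries and the prefix interpretation of the hint are invented for the example, and retrieval of supplementary matches would proceed as sketched for the RegEx engine above:

```python
import re

def hint_to_pattern(hint):
    """Interpret each hint token as a word prefix: the hint 'm' requires
    the listing to begin with a word starting with 'm'."""
    toks = [re.escape(t) + r'\S*' for t in hint.split()]
    return re.compile('^' + r'\s+'.join(toks), re.IGNORECASE)

n_best = ["mill creek family practice", "bill creek family practice",
          "mall creek festival"]

pat = hint_to_pattern("m")
print([c for c in n_best if pat.match(c)])
# -> ['mill creek family practice', 'mall creek festival']
```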

Turning to a discussion of refinement by exclusion, in certain situations, users may invoke the backend when they think there is sufficient information to retrieve their desired query in one pass, but find that their query does not show up among the choices, perhaps due to lack of popularity. Typically, users would have to provide more information and try again. Contrary to conventional approaches, the innovation supports this but also adds the ability to exclude words from retrieved results (see the RegEx discussion above for details). For example, in FIG. 8a, the user is looking for “pure networks” so he types “p n” and invokes the backend. The most popular exact matches to the regular expression “p* n*” are retrieved and displayed to the user. Seeing that none of the choices contain the intended query, or even part of the query, the user selects “None of the above” (FIG. 8b). This tells the backend to retrieve more results for the query “p* n*”, but to exclude all listings that contain words from the previous query suggestions, such as “princess” and “northwest.”

As such, the system not only returns more results to the user, this time containing the correct query (FIG. 8c), but also creates a new tab to hold all the excluded words (FIG. 8d). If the user has accidentally excluded a word, they can peruse the Excluded Words tab (FIG. 8d) and select the word to remove it from the tab, which will then bring up a new set of choices.
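By way of illustration only, the bookkeeping behind the “None of the above” choice and the Excluded Words tab might look like the following Python sketch (the class and method names are invented):

```python
class ExclusionState:
    """Track words excluded via 'None of the above' and support
    restoring them from the Excluded Words tab."""

    def __init__(self):
        self.excluded = set()

    def none_of_the_above(self, suggestions, query_words):
        # every suggested word the user did not type gets excluded
        for phrase in suggestions:
            self.excluded |= set(phrase.split()) - set(query_words)

    def restore(self, word):
        # user taps a word in the Excluded Words tab to un-exclude it
        self.excluded.discard(word)

state = ExclusionState()
state.none_of_the_above(["princess nails", "northwest pools"], ["p", "n"])
print(sorted(state.excluded))  # ['nails', 'northwest', 'pools', 'princess']
state.restore("northwest")     # a new set of choices would then be retrieved
```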

Note that if the user observes that part of the intended query shows up among the choices, similar to the word palette scenario, the user can select the word and the UI will fill in whatever query part matches it. For example, suppose the user was looking for “pacific networks.” If the user selects the word “pacific” in “3. pacific northwest ballet,” the query in the textbox will change from “p n” to “pacific n,” (FIG. 8e), at which point the user may choose to invoke the backend.

With reference to leveraging partial knowledge, sometimes users may not remember exactly the name of the listing they are looking for, but only parts of it. For example, they may remember that the listing begins with “pacific” and that some word thereafter starts with an “n.” The previous user scenario shows that the innovation UI supports this kind of search with text. In this scenario, the innovation demonstrates that the interface also enables this kind of search with speech. In FIGS. 9a-c, the user is looking for “black angus restaurant” but only remembers that the first word is “black.” Here, the user can simply say, “black ‘something’ restaurant” (FIG. 9a). Noticing that there is no “black something restaurant” in the listings, the innovation will automatically convert the “something” into a wildcard and retrieve exact matches from the RegEx Engine along with approximate matches from the IR Engine (FIG. 9b). Now, the query appears among the choices and the user simply selects the appropriate choice and is finished (FIG. 9c).

In order to support the recognition of “something” expressions of uncertainty, the innovation adjusts the statistical language model to allow for transitions to the word “something” before and after every word in the training sentences as a bigram. Business listings that actually contain the word “something” were few and far between, and were appropriately tagged so as not to generate a wildcard during inverse text normalization of the recognized result. The innovation can also transform the training sentences into one-character prefixes so that it can support partial knowledge queries such as “b something angus” for “b* angus” (FIGS. 9d-e).
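By way of illustration only, the inverse-text-normalization step that turns an uncertainty expression into a wildcard might be sketched in Python as follows; fusing “something” onto a preceding one-character prefix (so that “b something” yields “b*”) is an assumption drawn from the “b* angus” example above:

```python
def utterance_to_query(recognized):
    """Replace the uncertainty marker 'something' with a wildcard,
    fusing it onto a preceding one-character prefix when present."""
    out = []
    for word in recognized.split():
        if word == "something":
            if out and len(out[-1]) == 1:
                out[-1] += "*"       # 'b something' -> 'b*'
            else:
                out.append("*")      # standalone wildcard term
        else:
            out.append(word)
    return " ".join(out)

print(utterance_to_query("black something restaurant"))  # black * restaurant
print(utterance_to_query("b something angus"))           # b* angus
```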

The innovation interface can be characterized as “taming” speech recognition errors with a multi-modal interface. Although the innovation was designed with mobile voice search in mind, in certain situations, it may make sense to exploit richer gestures beyond simply selecting via touch or d-pad. For example, users could use gestures to separate words that they want in their query from those they wish to exclude. It is to be understood that the features, functions and benefits of the innovation can be applied to most any search scenario, including desktop-based search, without departing from the spirit and/or scope of the innovation and claims appended hereto.

The following research results are included to provide perspective as to the usefulness of the innovation—these research results are not intended to limit the innovation in any manner. Apart from switching modalities, a fair amount of research has been devoted to simultaneous multi-modal disambiguation. In accordance with the innovation, text hints could be construed as a way of fusing speech and text, though technically, the text could bias the internal processing of the speech recognizer.

In order to assess the effectiveness of the subject innovation in recovering from speech recognition errors, simulation experiments were conducted on utterances collected from a deployed mobile voice search product, namely Microsoft Live Search Mobile, which provides not only ADA but also maps, driving directions, movie times and local gas prices. Besides capturing the difficult acoustic conditions inherent in mobile environments, the collected utterances also represent a random sampling of speaker accents, speaker adaptation to surrounding noise, and even the variable recording quality of different mobile devices.

2317 local-area utterances were collected which had been transcribed by a professional transcription service and filtered to remove noise-only and yes-no utterances. The utterances were systematically collected to cover all days of the week as well as times in the day. For all of the simulation experiments, the utterances were submitted to a speech server which utilized the same acoustic and language models as Live Search Mobile. Of the 2317 utterances, the transcription appeared in the top position of the n-best list 72.0% of the time, and somewhere in the n-best list 80.0% of the time, where again n was limited to 8 (the number of readable choices displayable on a standard pocket PC (personal computer) form factor). As summarized in Table 1 below, in 20% of the utterances, the transcription did not appear at all in the n-best list. These failure cases constituted an opportunity for recovering from error, given that the innovation performs the same as the existing product for the other 80% of the cases.

TABLE 1
A breakdown of the simulation test data.

  Case                            Frequency    Percentage
  Top 1 High Conf (Bull's Eye)      545          24%
  Top 1 Med + Low Conf             1125          48%
  Top N                             183           8%
  All Wrong                         464          20%
  Total                            2317         100%

With regard to refinement with the word palette, and looking at just the failure cases, the first set of experiments examined how much recovery rate could be gained by treating the n-best list as a word palette and allowing users to select words. Although the interface itself enables users to also edit their queries by inserting or deleting characters, this was not permitted for the experiments. Words in the n-best list that matched the transcription were always selected in the proper word order. For example, if the transcription was “black angus restaurant,” “black” was selected from the n-best list before either “angus” or “restaurant.” Furthermore, as many words from the transcription as could be found in the word palette were selected. For instance, although just “black” could have been submitted as a query in the previous example, because “angus” could also be found in the n-best list, it was included.

As shown in FIG. 10, words in the n-best list alone (without supplementary matches from the backend) covered the full transcription 4.31% of the time. Note that full coverage constitutes recovery from the error, since the transcription can be completely built up word by word from the n-best list. In 24.6% of the cases, only part of the transcription could be covered by the n-best list (not shown in FIG. 10), in which case another query would need to be submitted to get a new list. If the n-best list is supplemented with matches from the backend, the recovery rate improves to 14.4%, a factor of 3.4 over using the n-best list alone as a word palette. For the transcriptions which were only partially covered by the n-best list, utilizing the backend with whatever words could be found raises the recovery rate to 25.2%. If the n-best list padded with supplementary matches is used to submit a query, the recovery rate is 28.5% (which is also the relative error reduction with respect to the entire data set). This is 5.9 times the recovery rate of using just the n-best list as a word palette.

Looking at text hints with speech, before examining the effect of providing text hints on the recovery rate, it was first considered, as a baseline, how well the innovation could recover from an error by just retrieving the top 8 most popular listings from the index. This is shown in FIG. 11 in the 0-character column. Surprisingly, guessing the top 8 most popular listings resulted in a recovery rate of 14.4%. FIG. 7a shows these listings as the default list when starting, which includes the general category “pizza” as well as popular stores such as “wal-mart” and “home depot.” The implication of such a high baseline is discussed below.

In applying text hints for the experiment, a simple left-to-right assignment of prefixes was used to generate a wildcard query, proceeding as follows: given m characters to assign, a character is assigned to the prefix of each word in the transcription. If there are still characters left to assign, the assignment loops back to the beginning word of the transcription and continues. For example, for a 3-character text hint, if the transcription contained three words such as “black angus restaurant”, prefix characters would be assigned for all three words; namely, “b* a* r*”. If, on the other hand, the transcription was “home depot,” the assignment would loop back to the first word so that the generated wildcard query would be “ho* d*”. After generating a wildcard query for the text hint from the transcription, the innovation used it to filter the n-best list obtained from submitting the transcription utterance to the speech server. If there were not enough list items after the filtering, the list was supplemented as described in the Supplement generator description above.
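By way of illustration only, this round-robin prefix assignment is simple enough to state directly in Python (the transcriptions are taken from the examples above):

```python
def text_hint_query(transcription, m):
    """Distribute m hint characters left-to-right over the words,
    looping back to the first word when words run out; each prefix
    then becomes a wildcard term."""
    words = transcription.split()
    counts = [0] * len(words)
    for i in range(m):
        counts[i % len(words)] += 1
    return " ".join(w[:c] + "*" for w, c in zip(words, counts) if c > 0)

print(text_hint_query("black angus restaurant", 3))  # -> b* a* r*
print(text_hint_query("home depot", 3))              # -> ho* d*
```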

When a 1-character text hint is used along with the spoken utterance, as shown in FIG. 11, the recovery rate jumps to almost 50%, with 16.8% of the transcriptions appearing in the top position of the result list. That is 3.4 times better than guessing the listing using no text hints. As more and more characters are used in the text hint, the recovery rate climbs to as high as 92.7% for 3 characters.

It will be understood that users often asked for popular listings. As such, because the backend utilizes popularity to retrieve k-best matches, the correct answer was frequently obtained. This may be because users have found that low-popularity listings do not get recognized as well as high-popularity listings, so they do not even bother with those. Or it may be that popular listings are precisely popular because they get asked for frequently. In any case, by providing users with a multi-modal interface that facilitates a richer set of recovery strategies, they may be encouraged to try unpopular queries as well as popular ones.

In this disclosure, a multi-modal interface system that can be used for voice search applications (e.g., mobile voice search) is presented. This innovation not only facilitates touch and text refinement whenever speech fails (or accuracy is compromised), but also allows users to assist the recognizer via text hints. The innovation can also take advantage of any partial knowledge users may have about their queries by letting them express their uncertainty through “something” expressions. Also discussed was an example overall architecture and details of how the innovation can quickly retrieve exact and approximate matches to the listings from the backend. Finally, in evaluating the innovation via simulation experiments conducted on real mobile voice search data, it was found that leveraging multi-modal refinement using the word palette resulted in a 28% relative reduction in error rate. Furthermore, providing text hints along with a spoken utterance resulted in dramatic gains in recovery rate, though this should be qualified by noting that users in the test data tended to ask for popular listings, which could be retrieved quickly.

As described supra, in mobile device aspects, voice search applications encourage users to “just say what you want” in order to obtain useful mobile content such as automated directory assistance (ADA). Unfortunately, when users only remember part of what they are looking for, they are forced to guess, even though what they know may be sufficient to retrieve the desired information. In this disclosure, it is proposed to expand the capabilities of voice search to allow users to explicitly express their uncertainties as part of their queries, and as such, to provide partial knowledge. Applied to ADA, the disclosure highlights the enhanced user experience that uncertain expressions afford and delineates how to perform language modeling and information retrieval. As described in detail above, the innovation evaluates this approach by assessing its impact on overall ADA performance and by discussing the results of an experiment in which users generated both uncertain expressions and guesses for directory listings. Uncertain expressions reduced relative error rate by 31.8% compared to guessing.

Referring now to FIG. 12, there is illustrated a block diagram of a computer operable to execute the disclosed architecture. In order to provide additional context for various aspects of the subject innovation, FIG. 12 and the following discussion are intended to provide a brief, general description of a suitable computing environment 1200 in which the various aspects of the innovation can be implemented. While the innovation has been described above in the general context of computer-executable instructions that may run on one or more computers, those skilled in the art will recognize that the innovation also can be implemented in combination with other program modules and/or as a combination of hardware and software.

Generally, program modules include routines, programs, components, data structures, etc., that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the inventive methods can be practiced with other computer system configurations, including single-processor or multiprocessor computer systems, minicomputers, mainframe computers, as well as personal computers, hand-held computing devices, microprocessor-based or programmable consumer electronics, and the like, each of which can be operatively coupled to one or more associated devices.

The illustrated aspects of the innovation may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules can be located in both local and remote memory storage devices.

A computer typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by the computer and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media can comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer.

Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.

With reference again to FIG. 12, the exemplary environment 1200 for implementing various aspects of the innovation includes a computer 1202, the computer 1202 including a processing unit 1204, a system memory 1206 and a system bus 1208. The system bus 1208 couples system components including, but not limited to, the system memory 1206 to the processing unit 1204. The processing unit 1204 can be any of various commercially available processors. Dual microprocessors and other multi-processor architectures may also be employed as the processing unit 1204.

The system bus 1208 can be any of several types of bus structure that may further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and a local bus using any of a variety of commercially available bus architectures. The system memory 1206 includes read-only memory (ROM) 1210 and random access memory (RAM) 1212. A basic input/output system (BIOS) is stored in a non-volatile memory 1210 such as ROM, EPROM or EEPROM, which BIOS contains the basic routines that help to transfer information between elements within the computer 1202, such as during start-up. The RAM 1212 can also include a high-speed RAM such as static RAM for caching data.

The computer 1202 further includes an internal hard disk drive (HDD) 1214 (e.g., EIDE, SATA), which internal hard disk drive 1214 may also be configured for external use in a suitable chassis (not shown), a magnetic floppy disk drive (FDD) 1216 (e.g., to read from or write to a removable diskette 1218) and an optical disk drive 1220 (e.g., to read a CD-ROM disk 1222, or to read from or write to other high-capacity optical media such as a DVD). The hard disk drive 1214, magnetic disk drive 1216 and optical disk drive 1220 can be connected to the system bus 1208 by a hard disk drive interface 1224, a magnetic disk drive interface 1226 and an optical drive interface 1228, respectively. For external drive implementations, the interface 1224 includes at least one of, or both of, Universal Serial Bus (USB) and IEEE 1394 interface technologies. Other external drive connection technologies are within contemplation of the subject innovation.

The drives and their associated computer-readable media provide nonvolatile storage of data, data structures, computer-executable instructions, and so forth. For the computer 1202, the drives and media accommodate the storage of any data in a suitable digital format. Although the description of computer-readable media above refers to a HDD, a removable magnetic diskette, and removable optical media such as a CD or DVD, it should be appreciated by those skilled in the art that other types of media which are readable by a computer, such as zip drives, magnetic cassettes, flash memory cards, cartridges, and the like, may also be used in the exemplary operating environment, and further, that any such media may contain computer-executable instructions for performing the methods of the innovation.

A number of program modules can be stored in the drives and RAM 1212, including an operating system 1230, one or more application programs 1232, other program modules 1234 and program data 1236. All or portions of the operating system, applications, modules, and/or data can also be cached in the RAM 1212. It is appreciated that the innovation can be implemented with various commercially available operating systems or combinations of operating systems.

A user can enter commands and information into the computer 1202 through one or more wired/wireless input devices, e.g., a keyboard 1238 and a pointing device, such as a mouse 1240. Other input devices (not shown) may include a microphone, an IR remote control, a joystick, a game pad, a stylus pen, touch screen, or the like. These and other input devices are often connected to the processing unit 1204 through an input device interface 1242 that is coupled to the system bus 1208, but can be connected by other interfaces, such as a parallel port, an IEEE 1394 serial port, a game port, a USB port, an IR interface, etc.

A monitor 1244 or other type of display device is also connected to the system bus 1208 via an interface, such as a video adapter 1246. In addition to the monitor 1244, a computer typically includes other peripheral output devices (not shown), such as speakers, printers, etc.

The computer 1202 may operate in a networked environment using logical connections via wired and/or wireless communications to one or more remote computers, such as a remote computer(s) 1248. The remote computer(s) 1248 can be a workstation, a server computer, a router, a personal computer, a portable computer, a microprocessor-based entertainment appliance, a peer device or other common network node, and typically includes many or all of the elements described relative to the computer 1202, although, for purposes of brevity, only a memory/storage device 1250 is illustrated. The logical connections depicted include wired/wireless connectivity to a local area network (LAN) 1252 and/or larger networks, e.g., a wide area network (WAN) 1254. Such LAN and WAN networking environments are commonplace in offices and companies, and facilitate enterprise-wide computer networks, such as intranets, all of which may connect to a global communications network, e.g., the Internet.

When used in a LAN networking environment, the computer 1202 is connected to the local network 1252 through a wired and/or wireless communication network interface or adapter 1256. The adapter 1256 may facilitate wired or wireless communication to the LAN 1252, which may also include a wireless access point disposed thereon for communicating with the wireless adapter 1256.

When used in a WAN networking environment, the computer 1202 can include a modem 1258, or is connected to a communications server on the WAN 1254, or has other means for establishing communications over the WAN 1254, such as by way of the Internet. The modem 1258, which can be internal or external and a wired or wireless device, is connected to the system bus 1208 via the input device interface 1242. In a networked environment, program modules depicted relative to the computer 1202, or portions thereof, can be stored in the remote memory/storage device 1250. It will be appreciated that the network connections shown are exemplary and that other means of establishing a communications link between the computers can be used.

The computer 1202 is operable to communicate with any wireless devices or entities operatively disposed in wireless communication, e.g., a printer, scanner, desktop and/or portable computer, portable data assistant, communications satellite, any piece of equipment or location associated with a wirelessly detectable tag (e.g., a kiosk, news stand, restroom), and telephone. This includes at least Wi-Fi and Bluetooth™ wireless technologies. Thus, the communication can be a predefined structure as with a conventional network or simply an ad hoc communication between at least two devices.

Wi-Fi, or Wireless Fidelity, allows connection to the Internet from a couch at home, a bed in a hotel room, or a conference room at work, without wires. Wi-Fi is a wireless technology similar to that used in a cell phone that enables such devices, e.g., computers, to send and receive data indoors and out, anywhere within the range of a base station. Wi-Fi networks use radio technologies called IEEE 802.11 (a, b, g, etc.) to provide secure, reliable, fast wireless connectivity. A Wi-Fi network can be used to connect computers to each other, to the Internet, and to wired networks (which use IEEE 802.3 or Ethernet). Wi-Fi networks operate in the unlicensed 2.4 and 5 GHz radio bands, at an 11 Mbps (802.11b) or 54 Mbps (802.11a) data rate, for example, or with products that contain both bands (dual band), so the networks can provide real-world performance similar to the basic 10BaseT wired Ethernet networks used in many offices.

Referring now to FIG. 13, there is illustrated a schematic block diagram of an exemplary computing environment 1300 in accordance with the subject innovation. The system 1300 includes one or more client(s) 1302. The client(s) 1302 can be hardware and/or software (e.g., threads, processes, computing devices). The client(s) 1302 can house cookie(s) and/or associated contextual information by employing the innovation, for example.

The system 1300 also includes one or more server(s) 1304. The server(s) 1304 can also be hardware and/or software (e.g., threads, processes, computing devices). The servers 1304 can house threads to perform transformations by employing the innovation, for example. One possible communication between a client 1302 and a server 1304 can be in the form of a data packet adapted to be transmitted between two or more computer processes. The data packet may include a cookie and/or associated contextual information, for example. The system 1300 includes a communication framework 1306 (e.g., a global communication network such as the Internet) that can be employed to facilitate communications between the client(s) 1302 and the server(s) 1304.

Communications can be facilitated via a wired (including optical fiber) and/or wireless technology. The client(s) 1302 are operatively connected to one or more client data store(s) 1308 that can be employed to store information local to the client(s) 1302 (e.g., cookie(s) and/or associated contextual information). Similarly, the server(s) 1304 are operatively connected to one or more server data store(s) 1310 that can be employed to store information local to the servers 1304.

What has been described above includes examples of the innovation. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the subject innovation, but one of ordinary skill in the art may recognize that many further combinations and permutations of the innovation are possible. Accordingly, the innovation is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims. Furthermore, to the extent that the term “includes” is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term “comprising” as “comprising” is interpreted when employed as a transitional word in a claim.

Claims

1. A system that facilitates multi-modal search query refinement, comprising:

a query administration component that employs a plurality of modalities to refine a list of query suggestion results into a regular expression query; and
a search query suggestion engine component that evaluates the regular expression query and renders a list of refined query suggestion results as a function of the evaluation.
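
By way of example, and not limitation, a minimal Python sketch of the two components recited in claim 1 follows. The class names, the refinement policy and the sample suggestions are hypothetical illustrations, not the claimed implementation.

```python
import re

class QueryAdministrationComponent:
    """Hypothetical stand-in for the claimed query administration
    component: folds multi-modal refinements (typed text, selected
    words, excluded words) into a single regular expression query."""

    def refine(self, typed, selected, excluded):
        # A '*' typed by the user (or inferred) becomes a regex wildcard;
        # selected words must appear and excluded words must not
        # (an illustrative policy only).
        core = re.escape(typed).replace(r"\*", r"\w*")
        required = "".join(f"(?=.*{re.escape(w)})" for w in selected)
        forbidden = "".join(f"(?!.*{re.escape(w)})" for w in excluded)
        return f"{forbidden}{required}.*{core}.*"

class SearchQuerySuggestionEngine:
    """Hypothetical stand-in for the claimed suggestion engine:
    evaluates the regular expression query and renders the refined
    list of query suggestion results."""

    def __init__(self, suggestions):
        self.suggestions = suggestions

    def evaluate(self, regex_query):
        pattern = re.compile(regex_query)
        return [s for s in self.suggestions if pattern.match(s)]

engine = SearchQuerySuggestionEngine(
    ["seattle coffee shops", "seattle tea houses", "portland coffee shops"])
admin = QueryAdministrationComponent()
query = admin.refine("cof*", selected=["seattle"], excluded=["tea"])
print(engine.evaluate(query))  # -> ['seattle coffee shops']
```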

2. The system of claim 1, wherein the regular expression query includes at least one wildcard.

3. The system of claim 1, wherein the list of query suggestion results includes a list of best candidate matches that represent a word palette that enables individual string selection, where the string can be a word or part of a word.

4. The system of claim 1, further comprising:

an analysis component that evaluates the regular expression query; and
a query generation component that renders a list of refined query suggestion results.

5. The system of claim 1, further comprising a selection component that enables a user to choose at least one word from the list of query suggestion results, wherein the selection facilitates refinement of the regular expression query, where refinement can include any deletion, substitution, or addition of characters to the original query, and wherein the refinement is maintained in a separate area as a “scratchpad.”
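
By way of example, and not limitation, the “scratchpad” of claim 5 could be sketched as follows; the class and method names are hypothetical, and only the claimed deletion, substitution and addition operations are illustrated.

```python
class Scratchpad:
    """Maintains the user's working refinement separately from the
    suggestion list, supporting deletion, substitution and addition
    of characters (a hypothetical sketch of the claimed 'scratchpad')."""

    def __init__(self, original_query):
        self.text = original_query

    def delete(self, start, end):
        # Remove the characters in [start, end).
        self.text = self.text[:start] + self.text[end:]

    def substitute(self, old, new):
        # Replace the first occurrence of `old` with `new`.
        self.text = self.text.replace(old, new, 1)

    def add(self, position, chars):
        # Insert `chars` at the given position.
        self.text = self.text[:position] + chars + self.text[position:]

pad = Scratchpad("harry potter")
pad.substitute("harry", "hairy")    # substitution
pad.add(len(pad.text), " costume")  # addition
print(pad.text)                     # -> 'hairy potter costume'
```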

6. The system of claim 5, wherein the selection is accomplished by way of a drag/drop procedure.

7. The system of claim 5, wherein at least a portion of the selection identifies an exclusion used as a parameter to establish the list of refined query suggestion results.

8. The system of claim 1, wherein a k-best suffix-array with exclusion constraints is used to generate the list of refined query suggestion results.
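By way of example, and not limitation, the k-best suffix-array with exclusion constraints recited in claim 8 could be approximated by the following naive Python sketch; the helper names and exclusion policy are illustrative only, and a production system would build the suffix array in near-linear time rather than by sorting every suffix.

```python
import bisect

def build_suffix_array(corpus):
    """Naive suffix array over a corpus of suggestion strings:
    every (suffix, string index, offset), sorted lexicographically."""
    return sorted((s[i:], idx, i)
                  for idx, s in enumerate(corpus)
                  for i in range(len(s)))

def k_best(corpus, prefix, excluded, k):
    """Return up to k corpus strings containing `prefix` as a substring,
    skipping any string that contains an excluded word (an illustrative
    reading of the claimed exclusion constraints)."""
    sa = build_suffix_array(corpus)
    keys = [entry[0] for entry in sa]
    lo = bisect.bisect_left(keys, prefix)
    results, seen = [], set()
    for suffix, idx, _ in sa[lo:]:
        if not suffix.startswith(prefix):
            break  # past the range of suffixes sharing the prefix
        if idx not in seen and not excluded & set(corpus[idx].split()):
            seen.add(idx)
            results.append(corpus[idx])
            if len(results) == k:
                break
    return results

corpus = ["seattle coffee", "seattle tea", "best coffee seattle"]
print(k_best(corpus, "sea", excluded={"tea"}, k=2))
# -> ['best coffee seattle', 'seattle coffee'] (order follows the array)
```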

9. The system of claim 1, wherein the plurality of modalities includes at least two of text, touch, speech and gesture.

10. The system of claim 1, wherein the list of refined query suggestion results includes an n-best list or alternates list from a speech recognizer as well as a list of supplementary results that includes at least one of an ‘exact’ match via a wildcard expression or an ‘approximate’ match via information retrieval algorithms.

11. The system of claim 10, wherein at least part of the n-best list obtained from the speech recognizer is submitted as a query to an information retrieval algorithm that is indifferent to the order of words in the regular expression query.
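By way of example, and not limitation, the supplementary results of claims 10 and 11 could be sketched as follows, with a simple bag-of-words overlap standing in for the order-indifferent information retrieval algorithm; the function names and sample data are hypothetical.

```python
import re

def exact_matches(wildcard_expr, index):
    """'Exact' supplementary results: listings matching a wildcard
    expression in which '*' stands for any run of characters."""
    pattern = re.compile(
        ".*".join(re.escape(p) for p in wildcard_expr.split("*")))
    return [s for s in index if pattern.fullmatch(s)]

def approximate_matches(n_best, index):
    """'Approximate' supplementary results: score each listing by
    bag-of-words overlap with the recognizer's n-best hypotheses,
    ignoring word order (an illustrative IR stand-in)."""
    query_words = {w for hyp in n_best for w in hyp.split()}
    scored = [(len(query_words & set(s.split())), s) for s in index]
    return [s for score, s in sorted(scored, reverse=True) if score > 0]

index = ["joe's coffee house", "house of coffee", "joe's diner"]
n_best = ["joe's coffee", "joes copy house"]  # recognizer hypotheses
print(exact_matches("joe's*house", index))    # -> ["joe's coffee house"]
print(approximate_matches(n_best, index))     # best word overlap first
```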

12. The system of claim 1, wherein the query administration component employs user-generated text to constrain speech recognition upon generating the regular expression query.

13. The system of claim 1, further comprising an artificial intelligence (AI) component that employs at least one of a probabilistic and a statistical-based analysis that infers an action that a user desires to be automatically performed.

14. A computer-implemented method of search refinement, comprising:

receiving a selection related to a plurality of words in a set of query suggestion results, wherein the selection defines a refinement of an original query;
establishing a regular expression query based upon a subset of the selection; and
rendering a plurality of refined query suggestion results based upon the regular expression query.

15. The computer-implemented method of claim 14, wherein the selection is effectuated by at least one of text, speech, touch or gesture and wherein the refinement is maintained upon a “scratchpad.”

16. The computer-implemented method of claim 14, further comprising excluding a subset of words in the set of results, wherein the excluded words define a parameter for the plurality of results.

17. The computer-implemented method of claim 14, further comprising:

receiving an input that supplements the selection;
converting a portion of the input into a wildcard; and
retrieving a subset of the plurality of refined query suggestion results based upon the wildcard.

18. A computer-executable system of refining search queries, comprising:

means for rendering query suggestion results as a plurality of selectable words;
means for choosing a subset of the plurality of selectable words; and
means for refining the query suggestion results based at least in part upon the chosen subset of selectable words.

19. The computer-executable system of claim 18, further comprising:

means for designating at least a portion of the chosen subset of selectable words as exclusions, wherein the exclusions are employed to retrieve the refined query suggestion results.

20. The computer-executable system of claim 19, wherein the means for choosing is a drag/drop procedure that selects each of the subset of the plurality of selectable words.

Patent History
Publication number: 20090287680
Type: Application
Filed: Aug 28, 2008
Publication Date: Nov 19, 2009
Applicant: MICROSOFT CORPORATION (Redmond, WA)
Inventors: Timothy Seung Yoon Paek (Sammamish, WA), Bo Thiesson (Woodinville, WA), Yun-Cheng Ju (Bellevue, WA), Bongshin Lee (Issaquah, WA), Christopher A. Meek (Kirkland, WA)
Application Number: 12/200,584