Patents by Inventor Tao Mei
Tao Mei has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20150356199
Abstract: The description relates to click-through-based cross-view learning for internet searches. One implementation includes determining distances among textual queries and/or visual images in a click-through-based structured latent subspace. Given new content, results can be sorted based on the distances in the click-through-based structured latent subspace.
Type: Application
Filed: July 3, 2014
Publication date: December 10, 2015
Applicant: Microsoft Corporation
Inventors: Tao Mei, Yong Rui, Linjun Yang, Ting Yao
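The distance-based retrieval this abstract describes can be sketched as follows. This is a minimal illustration, not the patented method: the projection matrices, feature dimensions, and data are hypothetical stand-ins for what would actually be learned from click-through data.

```python
import numpy as np

# Hypothetical projection matrices into a shared 10-d latent subspace.
# A real system would learn these from click-through pairs; here they
# are random placeholders so the sketch is runnable.
rng = np.random.default_rng(0)
W_text = rng.normal(size=(50, 10))    # 50-d text features  -> latent
W_image = rng.normal(size=(100, 10))  # 100-d image features -> latent

def rank_images(query_feat, image_feats):
    """Project the query and the images into the shared latent subspace
    and return image indices sorted by ascending distance to the query."""
    q = query_feat @ W_text                   # shape (10,)
    imgs = image_feats @ W_image              # shape (n, 10)
    dists = np.linalg.norm(imgs - q, axis=1)  # distance per image
    return np.argsort(dists)

query = rng.normal(size=50)
images = rng.normal(size=(5, 100))
print(rank_images(query, images))
```

Given new content, the same projection is applied and results are sorted by their latent-subspace distance, as the abstract states.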
-
Patent number: 9152709
Abstract: Some examples include receiving a microblog entry from a social stream domain. Further, some implementations include determining, based on a topic space associated with the social stream domain and a media domain, a topic that is associated with the microblog entry. Some implementations include determining, based on the topic space, one or more media items that are associated with the topic.
Type: Grant
Filed: February 25, 2013
Date of Patent: October 6, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Tao Mei, Shipeng Li, Suman Deb Roy, Wenjun Zeng
-
Publication number: 20140354768
Abstract: A system, method, or computer-readable storage device that enables mobile devices to capture high-quality photos by using both the rich context available from mobile devices and crowd-sourced social media on the Web. To support flexible and adaptive adoption of photography principles across different content and contexts, composition rules and exposure principles are learned from community-contributed images. By leveraging a mobile device user's scene context and social context, the proposed socialized mobile photography system can suggest an optimal view enclosure to achieve appealing composition. Because complex scene content and a number of shooting-related contexts affect exposure parameters, exposure learning is applied to suggest appropriate camera parameters.
Type: Application
Filed: May 30, 2013
Publication date: December 4, 2014
Inventors: Tao Mei, Shipeng Li, Wenyuan Yin, Chang Wen Chen
-
Publication number: 20140289228
Abstract: A user behavior model provides personalized recommendations based in part on time and location, particularly to users of mobile devices. Entity types are ranked according to relevance to the user. Example entity types are restaurant, hotel, etc. The relevance may be based on reference to a large-scale database containing queries from other users. Additionally, entities within each entity type may be ranked based on relevance to the user and the time and location context. A user interface may display a ranked list of entity types, such as restaurant, hotel, etc., wherein each entity type is represented by a highest-ranked entity within the entity type. Thus, the user interface may display a highest-ranked restaurant, a highest-ranked hotel, etc. Upon user selection of one such entity type, the user interface is replaced with a second user interface, for example showing a ranked hierarchy of restaurants, headed by the highest-ranked restaurant.
Type: Application
Filed: June 9, 2014
Publication date: September 25, 2014
Inventors: Tao Mei, Ying-Qing Xu, Shipeng Li, Jinfeng Zhuang, Bo Zhang, Peng Xu
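The two-level ranking this abstract describes (entity types ranked, each represented by its highest-ranked entity) can be sketched as below. The entity names and relevance scores are invented for illustration; a real system would derive the scores from the user's time, location, and query history.

```python
# Hypothetical relevance scores for one user at one time and location.
entities = {
    "restaurant": {"Cafe Roma": 0.9, "Noodle House": 0.7},
    "hotel":      {"Grand Inn": 0.6, "Budget Stay": 0.4},
    "museum":     {"City Museum": 0.8},
}

def top_level_view(entities):
    """Pair each entity type with its highest-ranked entity, then rank
    the types by that entity's relevance, as in the top-level UI."""
    best = {t: max(es, key=es.get) for t, es in entities.items()}
    ranked_types = sorted(entities, key=lambda t: entities[t][best[t]],
                          reverse=True)
    return [(t, best[t]) for t in ranked_types]

print(top_level_view(entities))
# -> [('restaurant', 'Cafe Roma'), ('museum', 'City Museum'), ('hotel', 'Grand Inn')]
```

Selecting one entry would then drill into the full per-type ranking (e.g. all restaurants sorted by score), matching the second user interface in the abstract.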
-
Patent number: 8831349
Abstract: A user may perform an image search on an object shown in an image. The user may use a mobile device to display an image. In response to displaying the image, the client device may send the image to a visual search system for image segmentation. Upon receiving a segmented image from the visual search system, the client device may display the segmented image to the user who may select one or more segments including an object of interest to instantiate a search. The visual search system may formulate a search query based on the one or more selected segments and perform a search using the search query. The visual search system may then return search results to the client device for display to the user.
Type: Grant
Filed: September 5, 2013
Date of Patent: September 9, 2014
Assignee: Microsoft Corporation
Inventors: Tao Mei, Shipeng Li, Ying-Qing Xu, Ning Zhang, Zheng Chen, Jian-Tao Sun
-
Publication number: 20140250120
Abstract: A facility for visual search on a mobile device takes advantage of multi-modal and multi-touch input on the mobile device. By extracting lexical entities from a spoken search query and matching the lexical entities to image tags, the facility provides candidate images for each entity. Selected ones of the candidate images are used to construct a composite visual query image on a query canvas. The relative size and position of the selected candidate images in the composite visual query image, which need not be an existing image, contribute to a definition of a context of the composite visual query image being submitted for context-aware visual search.
Type: Application
Filed: November 24, 2011
Publication date: September 4, 2014
Applicant: Microsoft Corporation
Inventors: Tao Mei, Jingdong Wang, Shipeng Li, Yang Wang
-
Publication number: 20140244614
Abstract: Some examples include receiving a microblog entry from a social stream domain. Further, some implementations include determining, based on a topic space associated with the social stream domain and a media domain, a topic that is associated with the microblog entry. Some implementations include determining, based on the topic space, one or more media items that are associated with the topic.
Type: Application
Filed: February 25, 2013
Publication date: August 28, 2014
Applicant: Microsoft Corporation
Inventors: Tao Mei, Shipeng Li, Suman Deb Roy, Wenjun Zeng
-
Patent number: 8804005
Abstract: Visual concepts contained within a video clip are classified based upon a set of target concepts. The clip is segmented into shots and a multi-layer multi-instance (MLMI) structured metadata representation of each shot is constructed. A set of pre-generated trained models of the target concepts is validated using a set of training shots. An MLMI kernel is recursively generated which models the MLMI structured metadata representation of each shot by comparing prescribed pairs of shots. The MLMI kernel is subsequently utilized to generate a learned objective decision function which learns a classifier for determining if a particular shot (that is not in the set of training shots) contains instances of the target concepts. A regularization framework can also be utilized in conjunction with the MLMI kernel to generate modified learned objective decision functions. The regularization framework introduces explicit constraints which serve to maximize the precision of the classifier.
Type: Grant
Filed: April 29, 2008
Date of Patent: August 12, 2014
Assignee: Microsoft Corporation
Inventors: Tao Mei, Xian-Sheng Hua, Shipeng Li, Zhiwei Gu
-
Patent number: 8751472
Abstract: A user behavior model provides personalized recommendations based in part on time and location, particularly to users of mobile devices. Entity types are ranked according to relevance to the user. Example entity types are restaurant, hotel, etc. The relevance may be based on reference to a large-scale database containing queries from other users. Additionally, entities within each entity type may be ranked based on relevance to the user and the time and location context. A user interface may display a ranked list of entity types, such as restaurant, hotel, etc., wherein each entity type is represented by a highest-ranked entity within the entity type. Thus, the user interface may display a highest-ranked restaurant, a highest-ranked hotel, etc. Upon user selection of one such entity type, the user interface is replaced with a second user interface, for example showing a ranked hierarchy of restaurants, headed by the highest-ranked restaurant.
Type: Grant
Filed: May 19, 2011
Date of Patent: June 10, 2014
Assignee: Microsoft Corporation
Inventors: Tao Mei, Ying-Qing Xu, Shipeng Li, Jinfeng Zhuang, Bo Zhang, Peng Xu
-
Publication number: 20140075393
Abstract: An image-based text extraction and searching system allows an image to be selected by a user's gesture input, and extracts the associated image data and proximate textual data in response to the image selection. The extracted image data and textual data can be utilized to perform or enhance a computerized search. The system can determine one or more database search terms based on the textual data and generate at least a first search query proposal related to the image data and the textual data.
Type: Application
Filed: September 11, 2012
Publication date: March 13, 2014
Applicant: Microsoft Corporation
Inventors: Tao Mei, Jingdong Wang, Shipeng Li, Jian-Tao Sun, Zheng Chen, Shiyang Lu
-
Patent number: 8654255
Abstract: Systems and methods for determining insertion points in a first video stream are described. The insertion points are configured for inserting at least one second video into the first video. In accordance with one embodiment, a method for determining the insertion points includes parsing the first video into a plurality of shots. The plurality of shots includes one or more shot boundaries. The method then determines one or more insertion points by balancing a discontinuity metric and an attractiveness metric of each shot boundary.
Type: Grant
Filed: September 20, 2007
Date of Patent: February 18, 2014
Assignee: Microsoft Corporation
Inventors: Xian-Sheng Hua, Tao Mei, Linjun Yang, Shipeng Li
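The boundary-scoring idea (balancing a discontinuity metric against an attractiveness metric) might be sketched as follows. The linear trade-off, the weight `alpha`, and the per-boundary metric values are assumptions for illustration, not the patent's actual formulation.

```python
def choose_insertion_points(boundaries, alpha=0.5, top_k=2):
    """Score each shot boundary by trading off low discontinuity against
    high attractiveness, and return the top-k boundaries as insertion
    points. `boundaries` maps a boundary time (seconds) to a
    (discontinuity, attractiveness) pair; `alpha` weights the trade-off."""
    def score(metrics):
        discontinuity, attractiveness = metrics
        # Reward attractive boundaries, penalize discontinuous ones.
        return alpha * attractiveness - (1 - alpha) * discontinuity
    ranked = sorted(boundaries, key=lambda b: score(boundaries[b]),
                    reverse=True)
    return ranked[:top_k]

# Hypothetical per-boundary metrics, both normalized to [0, 1].
boundaries = {12.0: (0.8, 0.3), 47.5: (0.2, 0.9), 90.0: (0.4, 0.6)}
print(choose_insertion_points(boundaries))  # -> [47.5, 90.0]
```

With equal weighting, the boundary at 47.5 s wins: it combines low discontinuity with high attractiveness, which is exactly the balance the abstract describes.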
-
Publication number: 20140003714
Abstract: A user may perform an image search on an object shown in an image. The user may use a mobile device to display an image. In response to displaying the image, the client device may send the image to a visual search system for image segmentation. Upon receiving a segmented image from the visual search system, the client device may display the segmented image to the user who may select one or more segments including an object of interest to instantiate a search. The visual search system may formulate a search query based on the one or more selected segments and perform a search using the search query. The visual search system may then return search results to the client device for display to the user.
Type: Application
Filed: September 5, 2013
Publication date: January 2, 2014
Applicant: Microsoft Corporation
Inventors: Tao Mei, Shipeng Li, Ying-Qing Xu, Ning Zhang, Zheng Chen, Jian-Tao Sun
-
Patent number: 8553981
Abstract: A user may perform an image search on an object shown in an image. The user may use a mobile device to display an image. In response to displaying the image, the client device may send the image to a visual search system for image segmentation. Upon receiving a segmented image from the visual search system, the client device may display the segmented image to the user who may select one or more segments including an object of interest to instantiate a search. The visual search system may formulate a search query based on the one or more selected segments and perform a search using the search query. The visual search system may then return search results to the client device for display to the user.
Type: Grant
Filed: May 17, 2011
Date of Patent: October 8, 2013
Assignee: Microsoft Corporation
Inventors: Tao Mei, Ying-Qing Xu, Shipeng Li, Ning Zhang, Zheng Chen, Jian-Tao Sun
-
Patent number: 8504422
Abstract: Techniques for recommending music and advertising to enhance a user's experience while browsing photos are described. In some instances, songs and ads are ranked for relevance to at least one photo from a photo album. The songs, ads and photo(s) from the photo album are then mapped to a style and mood ontology to obtain vector-based representations. The vector-based representations can include real valued terms, each term associated with a human condition defined by the ontology. A re-ranking process generates a relevancy term for each song and each ad indicating relevancy to the photo album. The relevancy terms can be calculated by summing weighted terms from the ranking and the mapping. Recommended music and ads may then be provided to a user, as the user browses a series of photos obtained from the photo album. The ads may be seamlessly embedded into the music in a nonintrusive manner.
Type: Grant
Filed: May 24, 2010
Date of Patent: August 6, 2013
Assignee: Microsoft Corporation
Inventors: Tao Mei, Xian-Sheng Hua, Shipeng Li, Jinlian Guo, Fei Sheng
-
Patent number: 8489589
Abstract: An initial ranked list of a first plurality of visual documents is obtained from a first source in response to a query, and a second plurality of visual documents relevant to the query is gathered from a plurality of second sources. Visual patterns identified from the second plurality of visual documents are compared with the first visual documents for reranking the first visual documents.
Type: Grant
Filed: February 5, 2010
Date of Patent: July 16, 2013
Assignee: Microsoft Corporation
Inventors: Tao Mei, Xian-Sheng Hua, Shipeng Li, Yuan Liu
-
Patent number: 8452794
Abstract: Techniques described herein enable better understanding of the intent of a user that submits a particular search query. These techniques receive a search request for images associated with a particular query. In response, the techniques determine images that are associated with the query, as well as other keywords that are associated with these images. The techniques then cluster, for each set of images associated with one of these keywords, the set of images into multiple groups. The techniques then rank the images and determine a representative image of each cluster. Finally, the tools suggest, to the user that submitted the query, to refine the search based on user selection of a keyword and a representative image. Thus, the techniques better understand the user's intent by allowing the user to refine the search based on another keyword and based on an image on which the user wishes to focus the search.
Type: Grant
Filed: February 11, 2009
Date of Patent: May 28, 2013
Assignee: Microsoft Corporation
Inventors: Linjun Yang, Meng Wang, Zhengjun Zha, Tao Mei, Xian-Sheng Hua
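The per-keyword clustering and representative-image selection might look like the following sketch, which uses a small k-means loop; the feature vectors and the Euclidean distance measure are assumptions for illustration, not the patent's specific clustering method.

```python
import numpy as np

def cluster_representatives(features, k=2, iters=10, seed=0):
    """Cluster image feature vectors with a basic k-means loop, then
    return the index of the image closest to each cluster centroid,
    i.e. each cluster's representative image."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each image to its nearest centroid.
        labels = np.argmin(
            np.linalg.norm(features[:, None] - centroids[None], axis=2),
            axis=1)
        # Move each centroid to the mean of its assigned images.
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = features[labels == c].mean(axis=0)
    # The representative is the image nearest each final centroid.
    return [int(np.argmin(np.linalg.norm(features - centroids[c], axis=1)))
            for c in range(k)]

# Two well-separated hypothetical feature groups of two images each.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(cluster_representatives(feats))
```

Each representative can then be shown next to its keyword so the user can refine the search visually, as the abstract describes.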
-
Patent number: 8369686
Abstract: Video advertising overlay technique embodiments are presented that generally detect a set of spatio-temporal nonintrusive positions within a series of consecutive video frames in shots of a digital video and then overlay contextually relevant ads on these positions. In one general embodiment, this is accomplished by decomposing the video into a series of shots, and then identifying a video advertisement for each of a selected set of the shots. The identified video advertisement is one that is determined to be the most relevant to the content of the shot. An overlay area is also identified in each of the shots, where the selected overlay area is the least intrusive among a plurality of prescribed areas to a viewer of the video. The video advertisements identified for the shots are then respectively scheduled to be overlaid in the identified overlay area of a shot, whenever the shot is played.
Type: Grant
Filed: September 30, 2009
Date of Patent: February 5, 2013
Assignee: Microsoft Corporation
Inventors: Tao Mei, Xian-Sheng Hua, Shipeng Li, Jinlian Guo
-
Patent number: 8352321
Abstract: Computer program products, devices, and methods for generating in-text embedded advertising are described. Embedded advertising is "hidden" or embedded into a message by matching an advertisement to the message and identifying a place in the message to insert the advertisement. For textual messages, statistical analysis of individual sentences is performed to determine where it would be most natural to insert an advertisement. Statistical rules of grammar derived from a language model may be used to choose a natural and grammatical place in the sentence for inserting the advertisement. Insertion of the advertisement creates a modified sentence without degrading the meaning of the original sentence, yet also includes the advertisement as part of a new sentence.
Type: Grant
Filed: December 12, 2008
Date of Patent: January 8, 2013
Assignee: Microsoft Corporation
Inventors: Tao Mei, Xian-Sheng Hua, Shipeng Li, Linjun Yang
-
Publication number: 20120294520
Abstract: A user may perform an image search on an object shown in an image. The user may use a mobile device to display an image. In response to displaying the image, the client device may send the image to a visual search system for image segmentation. Upon receiving a segmented image from the visual search system, the client device may display the segmented image to the user who may select one or more segments including an object of interest to instantiate a search. The visual search system may formulate a search query based on the one or more selected segments and perform a search using the search query. The visual search system may then return search results to the client device for display to the user.
Type: Application
Filed: May 17, 2011
Publication date: November 22, 2012
Applicant: Microsoft Corporation
Inventors: Tao Mei, Shipeng Li, Ying-Qing Xu, Ning Zhang, Zheng Chen, Jian-Tao Sun
-
Publication number: 20120295640
Abstract: A user behavior model provides personalized recommendations based in part on time and location, particularly to users of mobile devices. Entity types are ranked according to relevance to the user. Example entity types are restaurant, hotel, etc. The relevance may be based on reference to a large-scale database containing queries from other users. Additionally, entities within each entity type may be ranked based on relevance to the user and the time and location context. A user interface may display a ranked list of entity types, such as restaurant, hotel, etc., wherein each entity type is represented by a highest-ranked entity within the entity type. Thus, the user interface may display a highest-ranked restaurant, a highest-ranked hotel, etc. Upon user selection of one such entity type, the user interface is replaced with a second user interface, for example showing a ranked hierarchy of restaurants, headed by the highest-ranked restaurant.
Type: Application
Filed: May 19, 2011
Publication date: November 22, 2012
Applicant: Microsoft Corporation
Inventors: Tao Mei, Ying-Qing Xu, Shipeng Li, Jinfeng Zhuang, Bo Zhang, Peng Xu