Patents by Inventor Anitha Kannan

Anitha Kannan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20130275441
    Abstract: A framework is provided for composing texts about objects with structured information about those objects. Methodologies are disclosed for linking information from at least two data sources: one comprising a plurality of documents containing text pertaining to at least one object, and one comprising a plurality of structured records containing at least one characteristic of that object, where each characteristic consists of a property name and an associated property value for the object. The linkage is made by determining one or more instance-based traits for each object in both data sources and associating at least one record with at least one document that refers to the object, where each trait comprises one or more characteristics that identifiably distinguish the object from all other objects.
    Type: Application
    Filed: July 30, 2012
    Publication date: October 17, 2013
    Applicant: Microsoft Corporation
    Inventors: Rakesh Agrawal, Anitha Kannan, John C. Shafer, Ariel Fuxman
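    The linking step can be pictured with a minimal sketch: compute, for each structured record, attribute values that no other record shares (its trait), then attach the record to documents whose text mentions those values. This is an illustrative simplification, not the patented method; the sample records, documents, and helper names below are invented.
```python
# Minimal sketch of linking structured records to free-text documents via
# instance-based traits (attribute values that single out one record).
# Hypothetical data and helper names; not the actual patented algorithm.

def find_trait(record, all_records, skip=("id",)):
    """Return values that appear in this record but in no other record (its trait)."""
    others = [r for r in all_records if r is not record]
    return {
        str(v)
        for k, v in record.items()
        if k not in skip
        and all(str(v) not in map(str, o.values()) for o in others)
    }

def link_records(records, documents):
    """Attach each record to documents whose text contains all of its trait values."""
    links = {}
    for rec in records:
        trait = find_trait(rec, records)
        links[rec["id"]] = [
            doc_id
            for doc_id, text in documents.items()
            if trait and all(value.lower() in text.lower() for value in trait)
        ]
    return links

records = [
    {"id": "r1", "brand": "Acme", "model": "X200", "color": "black"},
    {"id": "r2", "brand": "Acme", "model": "X300", "color": "black"},
]
documents = {
    "d1": "Review of the Acme X200: a solid black camera.",
    "d2": "The Acme X300 improves on its predecessor.",
}
print(link_records(records, documents))  # {'r1': ['d1'], 'r2': ['d2']}
```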
  • Publication number: 20130204608
    Abstract: An image in a web page may be annotated using information derived from the multiple web pages on which the image appears. The web pages that show the image may be analyzed in light of each other to determine metadata about the image, and additional content may then be attached to the image. The additional content may include hyperlinks to other web pages and may be displayed as annotations on top of the image or in other manners. In many embodiments, searching, analysis, and classification of images may be performed before the web page is served.
    Type: Application
    Filed: February 6, 2012
    Publication date: August 8, 2013
    Applicant: Microsoft Corporation
    Inventors: Simon John Baker, Juliet Anne Bernstein, Krishnan Ramnath, Anitha Kannan, Dahua Lin, Qifa Ke, Matthew Uyttendaele
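    A rough sketch of the aggregation idea: occurrences of the same image on different pages are grouped (here by hashing the raw bytes, whereas the actual system also matches similar images), and frequent surrounding terms become candidate annotations. The data and helper names are invented.
```python
# Minimal sketch: aggregate candidate metadata for an image that appears on
# several web pages, keyed by a hash of the image bytes. Hypothetical data
# and helper names; the real system also searches, analyzes, and classifies
# images before pages are served.
import hashlib
from collections import Counter

def image_key(image_bytes: bytes) -> str:
    """Treat byte-identical images as the same image by hashing their content."""
    return hashlib.sha256(image_bytes).hexdigest()

def aggregate_annotations(occurrences):
    """occurrences: (image_bytes, surrounding_text) pairs gathered from many pages."""
    votes = {}
    for image_bytes, text in occurrences:
        votes.setdefault(image_key(image_bytes), Counter()).update(text.lower().split())
    # The most frequent surrounding terms become candidate annotations.
    return {key: [word for word, _ in counts.most_common(3)] for key, counts in votes.items()}

pages = [
    (b"<jpeg bytes>", "Golden Gate Bridge at sunset"),
    (b"<jpeg bytes>", "the Golden Gate Bridge in San Francisco"),
]
print(aggregate_annotations(pages))  # top terms: golden, gate, bridge
```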
  • Patent number: 8503769
    Abstract: Text in web pages or other text documents may be classified based on the images or other objects within the webpage. A system for identifying and classifying text related to an object may identify one or more web pages containing the image or similar images, determine topics from the text of the document, and develop a set of training phrases for a classifier. The classifier may be trained and then used to analyze the text in the documents. The training set may include both positive examples and negative examples of text taken from the set of documents. A positive example may include captions or other elements directly associated with the object, while negative examples may include text taken from the documents but located far from the object. In some cases, the system may iterate on the classification process to refine the results.
    Type: Grant
    Filed: December 28, 2010
    Date of Patent: August 6, 2013
    Assignee: Microsoft Corporation
    Inventors: Simon Baker, Dahua Lin, Anitha Kannan, Qifa Ke
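    A minimal sketch of the training step, assuming scikit-learn is available: captions near the image act as positive examples and text far from the image as negatives, and a simple bag-of-words classifier is fit on them. The example phrases are invented, and this is not the patented pipeline.
```python
# Minimal sketch: train a text classifier where captions near an image are
# positive examples and text far from the image is negative. Assumes
# scikit-learn is installed; the example phrases are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

positives = [  # captions and nearby text, treated as related to the image
    "Photo of the Eiffel Tower at night",
    "The Eiffel Tower seen from the Seine",
]
negatives = [  # text taken far from the image, treated as unrelated
    "Subscribe to our newsletter for travel deals",
    "Copyright 2013 Example Travel Blog",
]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(positives + negatives, [1] * len(positives) + [0] * len(negatives))

print(classifier.predict(["View of the Eiffel Tower from Trocadero"]))  # likely [1]
```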
  • Publication number: 20130144854
    Abstract: In one embodiment, a web service engine server may predict a successive action by a user based on an entity reference. The web service engine server may identify an entity reference in a data transmission caused by a user, determine from the data transmission the user's intention toward the entity reference using an intention model based on a transmission log, and predict a related successive web action option for the entity reference based on that intention.
    Type: Application
    Filed: December 6, 2011
    Publication date: June 6, 2013
    Applicant: Microsoft Corporation
    Inventors: Patrick Pantel, Michael Gamon, Anitha Kannan, Ariel Fuxman, Thomas Lin
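    A minimal sketch of an intention model built from a transmission log: count which actions historically follow references to each entity type, and suggest the most frequent one as the likely next action. The log entries and action names are invented.
```python
# Minimal sketch: estimate a user's likely next action for an entity from a
# log of past (entity_type, action) transmissions, then suggest that action.
# Log contents and entity types are invented for illustration.
from collections import Counter, defaultdict

transmission_log = [
    ("restaurant", "get_directions"),
    ("restaurant", "book_table"),
    ("restaurant", "get_directions"),
    ("movie", "buy_ticket"),
]

def build_intention_model(log):
    """Count how often each action follows a reference to each entity type."""
    model = defaultdict(Counter)
    for entity_type, action in log:
        model[entity_type][action] += 1
    return model

def predict_next_action(model, entity_type):
    """Return the most frequent action for this entity type, if any was logged."""
    actions = model.get(entity_type)
    return actions.most_common(1)[0][0] if actions else None

model = build_intention_model(transmission_log)
print(predict_next_action(model, "restaurant"))  # get_directions
```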
  • Patent number: 8423568
    Abstract: Described is a technology for automatically generating labeled training data for training a classifier based upon implicit information associated with the data. For example, whether a query has commercial intent can be classified based upon whether the query was submitted at a commercial website's search portal, as logged in a toolbar log. Positive candidate query-related data is extracted from the toolbar log based upon the associated implicit information. A click log is processed to obtain negative query-related data. The labeled training data is automatically generated by separating at least some of the positive candidate query data from the remaining positive candidate query data based upon the negative query data. The labeled training data may be used to train a classifier, such as to classify an online search query as having a certain type of intent or not.
    Type: Grant
    Filed: September 16, 2009
    Date of Patent: April 16, 2013
    Assignee: Microsoft Corporation
    Inventors: Ariel D. Fuxman, Anitha Kannan, Andrew Brian Goldberg, Rakesh Agrawal
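    A minimal sketch of the labeling idea: queries observed at a commercial site's search portal (via a toolbar log) are positive candidates, the click log supplies negatives, and candidates that also appear among the negatives are separated out. The log contents are invented, and the real method works with richer query-related data.
```python
# Minimal sketch: build labeled training data without manual annotation.
# Queries submitted at a commercial site's search portal (from a toolbar log)
# are positive candidates; a click log supplies negatives, and any candidate
# that also appears there is separated out. Log contents are invented.
toolbar_log_queries = {"buy running shoes", "nike air max price", "weather seattle"}
click_log_negative_queries = {"weather seattle", "wikipedia python"}

positives = toolbar_log_queries - click_log_negative_queries
negatives = click_log_negative_queries

# Label 1 = commercial intent, 0 = not; this data could then train a classifier.
training_data = [(q, 1) for q in sorted(positives)] + [(q, 0) for q in sorted(negatives)]
print(training_data)
```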
  • Patent number: 8417651
    Abstract: A method and apparatus for electronically matching an electronic offer to structured data for a product offering is disclosed. The structured data is reviewed and a dictionary of terms is created for each attribute in the structured data. Attributes in unstructured text may be determined. Attribute pairs (name and value) are obtained from the unstructured data and the structured data, the pairs from the two sources are compared, and a similarity level is calculated for each candidate match. The structured data pair with the highest similarity score to the unstructured data pair is selected and returned.
    Type: Grant
    Filed: May 20, 2010
    Date of Patent: April 9, 2013
    Assignee: Microsoft Corporation
    Inventors: Anitha Kannan, Inmar-Ella Givoni
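    A minimal sketch of the matching step: an (attribute name, value) pair extracted from the offer text is compared against each structured pair with a string-similarity measure, and the best-scoring structured pair is returned. difflib stands in for the similarity function; the product data is invented.
```python
# Minimal sketch: match an (attribute, value) pair extracted from unstructured
# offer text against structured product attributes by string similarity and
# return the best-scoring structured pair. difflib is a stand-in for the
# similarity measure; the product data is invented.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Simple case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def best_match(offer_pair, structured_pairs):
    """Return the structured (name, value) pair most similar to the offer pair."""
    name, value = offer_pair
    return max(
        structured_pairs,
        key=lambda pair: similarity(name, pair[0]) + similarity(value, pair[1]),
    )

structured_pairs = [("screen size", "15.6 in"), ("hard drive", "500 GB"), ("memory", "4 GB")]
offer_pair = ("hdd", "500GB")
print(best_match(offer_pair, structured_pairs))  # ('hard drive', '500 GB')
```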
  • Publication number: 20120314941
    Abstract: Product images are used in conjunction with textual descriptions to improve classification of product offerings. By combining cues from both the text and the image descriptions associated with products, implementations enhance both the precision and the recall of product description classification within the context of web-based commerce search. Several implementations are directed to improving those areas where text-only approaches are most unreliable. For example, several implementations use image signals to complement text classifiers and improve overall product classification in situations where brief textual product descriptions use vocabulary that overlaps with multiple diverse categories. Other implementations are directed to using text and image "training sets" to improve automated classifiers, including text-only classifiers.
    Type: Application
    Filed: June 13, 2011
    Publication date: December 13, 2012
    Applicant: Microsoft Corporation
    Inventors: Anitha Kannan, Partha Pratim Talukdar, Nikhil Rasiwasia, Qifa Ke, Rakesh Agrawal
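    A minimal sketch of combining the two cues by late fusion: the text classifier and the image classifier each output per-category probabilities, and the category with the highest weighted combination wins. The probabilities and the weight are invented for illustration.
```python
# Minimal sketch of combining text and image cues by late fusion: each
# classifier produces per-category probabilities, and the product is assigned
# to the category with the highest weighted combination. The probabilities
# and weight below are invented for illustration.
def fuse(text_probs, image_probs, image_weight=0.4):
    categories = set(text_probs) | set(image_probs)
    return max(
        categories,
        key=lambda c: (1 - image_weight) * text_probs.get(c, 0.0)
        + image_weight * image_probs.get(c, 0.0),
    )

# A brief description like "crystal vase" is ambiguous in text alone;
# the product photo pushes the decision toward the right category.
text_probs = {"jewelry": 0.45, "glassware": 0.40, "electronics": 0.15}
image_probs = {"glassware": 0.70, "jewelry": 0.20, "electronics": 0.10}
print(fuse(text_probs, image_probs))  # glassware
```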
  • Publication number: 20120163707
    Abstract: Text in web pages or other text documents may be classified based on the images or other objects within the webpage. A system for identifying and classifying text related to an object may identify one or more web pages containing the image or similar images, determine topics from the text of the document, and develop a set of training phrases for a classifier. The classifier may be trained and then used to analyze the text in the documents. The training set may include both positive examples and negative examples of text taken from the set of documents. A positive example may include captions or other elements directly associated with the object, while negative examples may include text taken from the documents but located far from the object. In some cases, the system may iterate on the classification process to refine the results.
    Type: Application
    Filed: December 28, 2010
    Publication date: June 28, 2012
    Applicant: Microsoft Corporation
    Inventors: Simon Baker, Dahua Lin, Anitha Kannan, Qifa Ke
  • Publication number: 20110289026
    Abstract: A method and apparatus for electronically matching an electronic offer to structured data for a product offering is disclosed. The structured data is reviewed and a dictionary of terms is created for each attribute in the structured data. Attributes in unstructured text may be determined. Attribute pairs (name and value) are obtained from the unstructured data and the structured data, the pairs from the two sources are compared, and a similarity level is calculated for each candidate match. The structured data pair with the highest similarity score to the unstructured data pair is selected and returned.
    Type: Application
    Filed: May 20, 2010
    Publication date: November 24, 2011
    Applicant: Microsoft Corporation
    Inventors: Anitha Kannan, Inmar-Ella Givoni
  • Publication number: 20110066650
    Abstract: Described is a technology for automatically generating labeled training data for training a classifier based upon implicit information associated with the data. For example, whether a query has commercial intent can be classified based upon whether the query was submitted at a commercial website's search portal, as logged in a toolbar log. Positive candidate query-related data is extracted from the toolbar log based upon the associated implicit information. A click log is processed to obtain negative query-related data. The labeled training data is automatically generated by separating at least some of the positive candidate query data from the remaining positive candidate query data based upon the negative query data. The labeled training data may be used to train a classifier, such as to classify an online search query as having a certain type of intent or not.
    Type: Application
    Filed: September 16, 2009
    Publication date: March 17, 2011
    Applicant: Microsoft Corporation
    Inventors: Ariel D. Fuxman, Anitha Kannan, Andrew Brian Goldberg, Rakesh Agrawal
  • Publication number: 20100318539
    Abstract: Described is a technology for obtaining labeled sample data. Labeling guidelines are converted into binary yes/no questions regarding data samples. The questions and data samples are provided to judges who then answer the questions for each sample. The answers are input to a label assignment algorithm that associates a label with each sample based upon the answers. If the guidelines are modified and previous answers to the binary questions are maintained, at least some of the previous answers may be used in re-labeling the samples in view of the modification.
    Type: Application
    Filed: June 15, 2009
    Publication date: December 16, 2010
    Applicant: Microsoft Corporation
    Inventors: Anitha Kannan, Krishnaram Kenthapadi, John C. Shafer, Ariel Fuxman
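    A minimal sketch: guidelines become yes/no questions, a judge answers them for a sample, and a rule maps the answers to a label. The questions and the mapping rule below are invented; the patent describes a more general label assignment algorithm and the reuse of previous answers when guidelines change.
```python
# Minimal sketch: labeling guidelines are expressed as binary yes/no questions,
# judges answer them per sample, and a simple rule maps answers to a label.
# The questions, answers, and rule are invented for illustration.
QUESTIONS = ["Is the result relevant to the query?", "Is the page spam?"]

def assign_label(answers):
    """answers: dict mapping question text -> True/False from a judge."""
    if answers["Is the page spam?"]:
        return "bad"
    return "good" if answers["Is the result relevant to the query?"] else "fair"

judgments = {
    "Is the result relevant to the query?": True,
    "Is the page spam?": False,
}
print(assign_label(judgments))  # good
```
    Because labels are derived from the per-question answers rather than assigned directly, answers to questions that survive a guideline change can be reused when re-labeling.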
  • Patent number: 7729531
    Abstract: Many problems in the fields of image processing and computer vision relate to creating good representations of information in images of objects in scenes. We provide a system for learning repeated-structure elements from one or more input images. The repeated-structure elements are patches that may be single pixels or coherent groups of pixels of varying shape, size and appearance (where those shapes and sizes are not pre-specified). Input images are mapped to a single output image using offset maps to specify the mapping. A joint probability distribution on the offset maps, output image and input images is specified and an unsupervised learning process is used to learn the offset maps and output image. The learnt output image comprises repeated-structure elements. This shape and appearance information captured in the learnt repeated-structure elements may be used for object recognition and many other tasks.
    Type: Grant
    Filed: September 19, 2006
    Date of Patent: June 1, 2010
    Assignee: Microsoft Corporation
    Inventors: John Winn, Anitha Kannan, Carsten Rother
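    A rough stand-in for the idea of repeated-structure elements: fixed-size patches are extracted from an image and clustered, so common patch clusters play the role of repeated elements. The patent instead learns variable-shape elements and offset maps under a joint probabilistic model with unsupervised learning; this sketch (plain k-means with numpy) only illustrates the notion of recurring image structure.
```python
# Simplified stand-in: extract fixed-size patches from an image and cluster
# them, so frequent patch clusters act as "repeated-structure elements".
# Not the patented offset-map model; assumes numpy is installed.
import numpy as np

def extract_patches(image, size=4):
    """Cut the image into non-overlapping size x size patches, flattened to vectors."""
    h, w = image.shape
    return np.array([
        image[y:y + size, x:x + size].ravel()
        for y in range(0, h - size + 1, size)
        for x in range(0, w - size + 1, size)
    ])

def cluster_patches(patches, k=3, iters=10, seed=0):
    """Plain k-means over patch vectors; cluster centers are the learned elements."""
    rng = np.random.default_rng(seed)
    centers = patches[rng.choice(len(patches), k, replace=False)].astype(float)
    for _ in range(iters):
        dists = ((patches[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = patches[labels == j].mean(axis=0)
    return centers, labels

image = np.tile(np.arange(8), (8, 1)).astype(float)  # synthetic repeating pattern
patches = extract_patches(image)
centers, labels = cluster_patches(patches)
print(centers.shape, np.bincount(labels))
```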
  • Publication number: 20080069438
    Abstract: Many problems in the fields of image processing and computer vision relate to creating good representations of information in images of objects in scenes. We provide a system for learning repeated-structure elements from one or more input images. The repeated-structure elements are patches that may be single pixels or coherent groups of pixels of varying shape, size and appearance (where those shapes and sizes are not pre-specified). Input images are mapped to a single output image using offset maps to specify the mapping. A joint probability distribution on the offset maps, output image and input images is specified and an unsupervised learning process is used to learn the offset maps and output image. The learnt output image comprises repeated-structure elements. This shape and appearance information captured in the learnt repeated-structure elements may be used for object recognition and many other tasks.
    Type: Application
    Filed: September 19, 2006
    Publication date: March 20, 2008
    Applicant: Microsoft Corporation
    Inventors: John Winn, Anitha Kannan, Carsten Rother