Patents by Inventor Siva Kalyana Pavan Kumar Mallapragada Naga Surya

Siva Kalyana Pavan Kumar Mallapragada Naga Surya has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220398451
    Abstract: Systems and methods are provided for training and using a deep neural network with adaptively trained off-ramps that allow an early exit at an intermediate representation layer. The training includes, for each intermediate representation layer in a sequence of intermediate representation layers, predicting a label based on the training data and comparing it against the correct label. The training further includes generating a confidence value associated with the predicted label. The confidence value is based on optimizing an objective function that includes a weighted entropy of the predicted probability distribution, weighted according to whether previous intermediate representation layers have accurately predicted the label. Use of the weighted entropy focuses the training on predicting labels on which the previous intermediate representation layers have performed poorly, rather than labels that have already exited before the intermediate representation layer being trained.
    Type: Application
    Filed: June 15, 2021
    Publication date: December 15, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Siva Kalyana Pavan Kumar Mallapragada Naga Surya, Joseph John Pfeiffer, III, Davis Leland Gilton
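    A minimal, hypothetical sketch of the weighted-entropy idea described in this abstract: examples that earlier off-ramps already classify correctly are down-weighted, so the entropy/confidence objective at the current off-ramp concentrates on examples the previous layers missed. The function names, the 0.1 residual weight, and the alpha coefficient are illustrative assumptions rather than details from the filing.

        import torch
        import torch.nn.functional as F

        def off_ramp_loss(logits, labels, prev_correct, alpha=0.1):
            # logits: [N, C] off-ramp outputs; labels: [N] correct labels;
            # prev_correct: [N] bool, True where an earlier off-ramp was already correct.
            probs = F.softmax(logits, dim=-1)
            # Down-weight examples the previous off-ramps already handled (assumed factor).
            weights = 1.0 - 0.9 * prev_correct.float()
            # Per-example classification loss against the correct labels.
            ce = F.cross_entropy(logits, labels, reduction="none")
            # Entropy of the predicted distribution; the per-example weighting focuses
            # the confidence objective on labels the earlier layers predicted poorly.
            entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1)
            return (weights * (ce + alpha * entropy)).mean()

        def exit_confidence(logits):
            # A simple early-exit signal: the maximum softmax probability at this off-ramp.
            return F.softmax(logits, dim=-1).max(dim=-1).values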
  • Publication number: 20200285681
    Abstract: Systems and methods are provided for generating a sequence of related text-based search queries.
    Type: Application
    Filed: May 26, 2020
    Publication date: September 10, 2020
    Inventors: Siva Kalyana Pavan Kumar Mallapragada Naga Surya, Carlos Barcelo
  • Patent number: 10664534
    Abstract: A source product and a competitor product are automatically matched by computing similarity scores between the products. An attribute of a source product is extracted from information received from a user. The extracted attribute is then used to generate a search term. A search for competitor product information is performed using the search term and a similarity score between the source product and the competitor product is calculated. In one implementation, the similarity score is calculated by generating a first value pair for the source product and the competitor product, and for the first value pair, (1) assigning a value to a degree of similarity between the source product and the competitor product with respect to a common feature, and (2) finding the product of the assigned value and a weight value assigned to the common feature.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: May 26, 2020
    Assignee: Home Depot Product Authority, LLC
    Inventors: Siva Kalyana Pavan Kumar Mallapragada Naga Surya, Carlos Barcelo
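    A minimal, hypothetical sketch of the weighted feature-similarity scoring described in this abstract: each feature common to the source and competitor products receives a similarity value, that value is multiplied by the feature's weight, and the products of the two are accumulated into an overall score. The feature names, weights, and token-overlap similarity are illustrative assumptions.

        def feature_similarity(a, b):
            # Crude per-feature similarity in [0, 1]: exact match, otherwise token overlap.
            if a == b:
                return 1.0
            ta, tb = set(str(a).lower().split()), set(str(b).lower().split())
            return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

        def match_score(source, competitor, weights):
            # Sum of (similarity value x feature weight) over features present in both products.
            score = 0.0
            for feature, weight in weights.items():
                if feature in source and feature in competitor:
                    score += weight * feature_similarity(source[feature], competitor[feature])
            return score

        source = {"brand": "Acme", "title": "18V cordless drill", "finish": "matte"}
        competitor = {"brand": "Acme", "title": "cordless drill kit 18V"}
        print(match_score(source, competitor, {"brand": 0.5, "title": 0.4, "finish": 0.1}))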
  • Patent number: 9223871
    Abstract: Wrappers are induced for multiple domains where, for a given target string having relatively universal distribution across domains of interest, a first wrapper may be defined and trained for a particular domain. Target strings extracted from that domain may be used to search for documents in other domains. New wrappers may be learned for other domains also containing the target strings. Further, a first wrapper may be learned for a given domain using a limited amount of training data from that single domain. The first wrapper is then applied to all pages in the domain to extract the relevant information. A few of the new words extracted are then searched against the document collection to obtain a list of domains that contain the extracted words. The updated information may be used as training data to learn new wrappers on those domains.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: December 29, 2015
    Assignee: Homer TLC, Inc.
    Inventor: Siva Kalyana Pavan Kumar Mallapragada Naga Surya
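    A hypothetical sketch of the cross-domain bootstrapping loop described in this abstract: a wrapper trained on one domain extracts target strings from that domain's pages, a few of the extracted strings are searched against the collection to locate other domains containing them, and those matches seed wrappers for the new domains. learn_wrapper, apply_wrapper, and find_domains_containing are illustrative callables standing in for the induction, extraction, and search steps, not APIs from the filing.

        def bootstrap_wrappers(seed_domain, seed_examples, pages_by_domain,
                               learn_wrapper, apply_wrapper, find_domains_containing,
                               max_rounds=3):
            # Start with a single wrapper trained from limited data in the seed domain.
            wrappers = {seed_domain: learn_wrapper(seed_domain, seed_examples)}
            frontier = [seed_domain]
            for _ in range(max_rounds):
                new_frontier = []
                for domain in frontier:
                    # Apply the domain's wrapper to every page in that domain.
                    extracted = [apply_wrapper(wrappers[domain], page)
                                 for page in pages_by_domain.get(domain, [])]
                    strings = [s for s in extracted if s]
                    # Search a few extracted strings to find other domains containing them.
                    for other in find_domains_containing(strings[:5]):
                        if other not in wrappers:
                            # The matched strings serve as training data for the new domain.
                            wrappers[other] = learn_wrapper(other, strings[:5])
                            new_frontier.append(other)
                if not new_frontier:
                    break
                frontier = new_frontier
            return wrappers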
  • Publication number: 20140136494
    Abstract: Information from a plurality of domains is automatically extracted according to an iterative application of rules. A first rule is generated based on a target string. The first rule comprises at least one filter. A domain of interest is identified and a training set is generated using the target string and at least one document in the domain of interest. The first rule is applied to each document in the training set to obtain a first set of target results. The first set of target results is compared to a desired set of target results. Based on the comparison, a second rule may be created and applied to the training set to yield an improved second set of target results.
    Type: Application
    Filed: March 15, 2013
    Publication date: May 15, 2014
    Applicant: Homer TLC, Inc.
    Inventor: Siva Kalyana Pavan Kumar Mallapragada Naga Surya
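    A minimal, hypothetical illustration of the iterate-and-compare loop described in this abstract, treating a rule as a regular-expression filter: the rule is applied to the training documents, the extracted results are compared with the desired results, and a refined rule is proposed until the two sets agree. apply_rule and propose_refinement are illustrative names, not terms from the filing.

        import re

        def apply_rule(rule, documents):
            # A rule here is a regex filter; collect every match across the training set.
            return {m for doc in documents for m in re.findall(rule, doc)}

        def refine_rules(initial_rule, documents, desired, propose_refinement, max_iters=5):
            rule = initial_rule
            for _ in range(max_iters):
                results = apply_rule(rule, documents)
                if results == desired:
                    break  # the extracted target results match the desired set
                # Propose a second (or later) rule based on the observed mismatch.
                rule = propose_refinement(rule, results, desired)
            return rule, apply_rule(rule, documents)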
  • Publication number: 20140136549
    Abstract: A source product and a competitor product are automatically matched by computing similarity scores between the products. An attribute of a source product is extracted from information received from a user. The extracted attribute is then used to generate a search term. A search for competitor product information is performed using the search term and a similarity score between the source product and the competitor product is calculated. In one implementation, the similarity score is calculated by generating a first value pair for the source product and the competitor product, and for the first value pair, (1) assigning a value to a degree of similarity between the source product and the competitor product with respect to a common feature, and (2) finding the product of the assigned value and a weight value assigned to the common feature.
    Type: Application
    Filed: March 15, 2013
    Publication date: May 15, 2014
    Applicant: Homer TLC, Inc.
    Inventors: Siva Kalyana Pavan Kumar Mallapragada Naga Surya, Carlos Barcelo
  • Publication number: 20140136568
    Abstract: Wrappers are induced for multiple domains where, for a given target string having relatively universal distribution across domains of interest, a first wrapper may be defined and trained for a particular domain. Target strings extracted from that domain may be used to search for documents in other domains. New wrappers may be learned for other domains also containing the target strings. Further, a first wrapper may be learned for a given domain using a limited amount of training data from that single domain. The first wrapper is then applied to all pages in the domain to extract the relevant information. A few of the new words extracted are then searched against the document collection to obtain a list of domains that contain the extracted words. The updated information may be used as training data to learn new wrappers on those domains.
    Type: Application
    Filed: March 15, 2013
    Publication date: May 15, 2014
    Applicant: Homer TLC, Inc.
    Inventor: Siva Kalyana Pavan Kumar Mallapragada Naga Surya