Patents by Inventor Siddharth Sharma

Siddharth Sharma has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11928558
    Abstract: A request associated with a review is received. Within first content, a first field of interest and a second field of interest are identified, and within second content, a third field of interest and a fourth field of interest are identified. A review is generated that includes a first indication of the first field of interest and a second indication of the second field of interest within the first content, as well as a third indication of the third field of interest and a fourth indication of the fourth field of interest within the second content. The review is transmitted to a device of a reviewer for reviewing the content.
    Type: Grant
    Filed: November 29, 2019
    Date of Patent: March 12, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Siddharth Vivek Joshi, Anuj Gupta, Mark Chien, Jonathan Thomas Greenlee, Stefano Stefani, Warren Barkley, Jon I. Turow, Sindhu Chejerla, Kriti Bharti, Prateek Sharma
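    A minimal Python sketch of the flow the abstract above describes: fields of interest are located in two pieces of content and bundled into a review for a human reviewer. The class and function names (FieldOfInterest, find_fields, build_review) and the keyword-based matching are illustrative assumptions, not the patented implementation.

      # Hypothetical sketch only: locate fields of interest in each piece of
      # content and assemble a review object to send to a reviewer's device.
      from dataclasses import dataclass, asdict

      @dataclass
      class FieldOfInterest:
          label: str   # e.g. "Total"
          start: int   # character offset where the field begins
          end: int     # character offset where the field ends

      def find_fields(content: str, keywords: list) -> list:
          """Identify each keyword in the content and record its span."""
          fields = []
          for kw in keywords:
              idx = content.find(kw)
              if idx != -1:
                  fields.append(FieldOfInterest(kw, idx, idx + len(kw)))
          return fields

      def build_review(contents: dict, keywords: list) -> dict:
          """Generate a review holding an indication of every field of interest
          found within every piece of content."""
          return {name: [asdict(f) for f in find_fields(text, keywords)]
                  for name, text in contents.items()}

      if __name__ == "__main__":
          docs = {"first_content": "Total: 120 USD due 2024-01-01",
                  "second_content": "Total: 95 USD due 2024-02-01"}
          print(build_review(docs, ["Total", "due"]))   # transmitted to the reviewer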
  • Publication number: 20240062003
    Abstract: Various embodiments of the present invention provide methods, apparatus, systems, computing devices, computing entities, and/or the like for performing natural language processing operations by generating semantic table representations for table data objects using a token-wise entity type classification mechanism whose output space is defined by a set of defined entity types characterized by an inter-related entity type taxonomy, to generate a representation of a table data object that describes per-token semantic inferences and cross-token semantic inferences performed on the table data object in accordance with subject-matter-domain insights, as described by the inter-related entity type taxonomy.
    Type: Application
    Filed: August 22, 2022
    Publication date: February 22, 2024
    Inventors: Mrityunjai Singh, Aviral Sharma, Jatin Lamba, Shreyansh S. Nanawati, Deeksha Thareja, Siddharth Sharma
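    A toy Python sketch in the spirit of the abstract above: each table token is classified into a defined entity type drawn from an inter-related taxonomy, and the per-token inferences are collected into a table representation. The taxonomy, the rule-based classifier, and all names here are assumptions for illustration, not the filed method.

      # Illustrative only: token-wise entity typing over table cells, with a
      # tiny hand-written taxonomy standing in for the learned classifier.
      ENTITY_TAXONOMY = {
          "amount":   {"parent": "numeric"},
          "date":     {"parent": "temporal"},
          "provider": {"parent": "organization"},
      }

      def classify_token(token: str) -> str:
          """Assign a defined entity type to a single table token (toy rules)."""
          if token.replace(".", "", 1).isdigit():
              return "amount"
          if "-" in token and token[:4].isdigit():
              return "date"
          return "provider"

      def semantic_table_representation(table: list) -> list:
          """Build a per-token representation of the table: the token, its entity
          type, and the parent type from the taxonomy."""
          rep = []
          for row in table:
              row_rep = []
              for tok in row:
                  etype = classify_token(tok)
                  row_rep.append({"token": tok, "type": etype,
                                  "parent": ENTITY_TAXONOMY[etype]["parent"]})
              rep.append(row_rep)
          return rep

      if __name__ == "__main__":
          print(semantic_table_representation([["Acme", "2023-05-01", "120.50"]]))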
  • Patent number: 11886912
    Abstract: Data processing approaches are disclosed that include receiving a configuration indicating a plurality of parameters for performing a data processing job, identifying available compute resources from a plurality of public cloud infrastructures, where each public cloud infrastructure of the plurality of public cloud infrastructures supports one or more computing applications, one or more job schedulers, and one or more utilization rates, selecting one or more compute clusters from one or more of the plurality of public cloud infrastructures based on a matching process between the parameters for performing the data processing job and a combination of the one or more computing applications, the one or more job schedulers, and the one or more utilization rates, and initiating the one or more compute clusters for processing the data processing job based on the selecting.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: January 30, 2024
    Assignee: Salesforce, Inc.
    Inventors: Amit Martu Kamat, Siddharth Sharma, Raveendrnathan Loganathan, Anil Raju Puliyeril, Kenneth Siu
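    A minimal Python sketch of the matching step described in the abstract above: job parameters are compared against each cluster's supported applications, job schedulers, and utilization rate before clusters are initiated. The Cluster data model, the utilization ceiling, and all names are assumptions, not Salesforce's implementation.

      # Hypothetical sketch: pick compute clusters across public clouds whose
      # capabilities and headroom match the data processing job's parameters.
      from dataclasses import dataclass, field

      @dataclass
      class Cluster:
          cloud: str
          applications: set = field(default_factory=set)   # e.g. {"spark"}
          schedulers: set = field(default_factory=set)     # e.g. {"yarn"}
          utilization: float = 0.0                         # 0.0 idle .. 1.0 full

      def select_clusters(job: dict, clusters: list, max_util: float = 0.8) -> list:
          """Return clusters supporting the job's application and scheduler with
          utilization below the configured ceiling."""
          return [c for c in clusters
                  if job["application"] in c.applications
                  and job["scheduler"] in c.schedulers
                  and c.utilization <= max_util]

      if __name__ == "__main__":
          pool = [Cluster("cloud_a", {"spark"}, {"yarn"}, 0.45),
                  Cluster("cloud_b", {"spark"}, {"kubernetes"}, 0.30)]
          job = {"application": "spark", "scheduler": "yarn"}
          for c in select_clusters(job, pool):
              print("initiating cluster on", c.cloud)   # then run the job there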
  • Publication number: 20230297485
    Abstract: Various embodiments include a system for generating performance monitoring data in a computing system. The system includes a unit level counter with a set of counters, where each counter increments during each clock cycle in which a corresponding electronic signal is at a first state, such as a high or low logic level state. Periodically, the unit level counter transmits the counter values to a corresponding counter collection unit. The counter collection unit includes a set of counters that aggregates the values of the counters in multiple unit level counters. Based on certain trigger conditions, the counter collection unit transmits records to a reduction channel. The reduction channel includes a set of counters that aggregates the values of the counters in multiple counter collection units. Each virtual machine executing on the system can access a different corresponding reduction channel, providing secure performance metric data for each virtual machine.
    Type: Application
    Filed: March 18, 2022
    Publication date: September 21, 2023
    Inventors: Pranav VAIDYA, Alan MENEZES, Siddharth SHARMA, Jin OUYANG, Gregory Paul SMITH, Timothy J. MCDONALD, Shounak KAMALAPURKAR, Abhijat RANADE, Thomas Melvin OGLETREE
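    A simplified Python model of the three-level hierarchy the abstract above describes: unit level counters increment per clock cycle, periodically drain into counter collection units, which in turn aggregate into a per-virtual-machine reduction channel. The class names and simplified trigger logic are assumptions for exposition; the actual design is hardware.

      # Illustrative sketch: unit level counters feed counter collection units,
      # which feed a per-VM reduction channel; trigger conditions are elided.
      class UnitLevelCounter:
          def __init__(self, n: int):
              self.counts = [0] * n

          def clock(self, signals: list) -> None:
              """Increment each counter whose monitored signal is asserted this cycle."""
              for i, high in enumerate(signals):
                  if high:
                      self.counts[i] += 1

          def drain(self) -> list:
              """Periodically hand the counter values to a collection unit and reset."""
              out, self.counts = self.counts, [0] * len(self.counts)
              return out

      class CounterCollectionUnit:
          def __init__(self, n: int):
              self.totals = [0] * n

          def collect(self, counts: list) -> None:
              self.totals = [a + b for a, b in zip(self.totals, counts)]

      class ReductionChannel:
          """Per-virtual-machine aggregation point over several collection units."""
          def __init__(self, n: int):
              self.totals = [0] * n

          def reduce(self, ccus: list) -> None:
              for ccu in ccus:
                  self.totals = [a + b for a, b in zip(self.totals, ccu.totals)]

      if __name__ == "__main__":
          ulc = UnitLevelCounter(2)
          for cycle in range(4):
              ulc.clock([True, cycle % 2 == 0])   # signal 0 every cycle, signal 1 on even cycles
          ccu = CounterCollectionUnit(2)
          ccu.collect(ulc.drain())
          channel = ReductionChannel(2)
          channel.reduce([ccu])
          print(channel.totals)                   # -> [4, 2]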
  • Publication number: 20230007920
    Abstract: Graphics processing unit (GPU) performance and power efficiency are improved using machine learning to tune operating parameters based on performance monitor values and application information. Performance monitor values are processed using machine learning techniques to generate model parameters, which are used by a control unit within the GPU to provide real-time updates to the operating parameters. In one embodiment, a neural network processes the performance monitor values to generate operating parameters in real-time.
    Type: Application
    Filed: September 15, 2022
    Publication date: January 12, 2023
    Inventors: Rouslan L. Dimitrov, Dale L. Kirkland, Emmett M. Kilgariff, Sachin Satish Idgunji, Siddharth Sharma
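    A toy sketch of the control loop described above: a small neural network maps performance monitor values to operating parameters that a control unit could apply. The network shape, the specific counters, and the two outputs (clock and voltage scales) are assumptions, not the filed model.

      # Hypothetical sketch: one forward pass from performance counters to
      # proposed operating parameters; weights here are random placeholders.
      import numpy as np

      rng = np.random.default_rng(0)
      W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # 4 performance counters in
      W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)   # 2 operating parameters out

      def propose_operating_params(perf_counters: np.ndarray) -> np.ndarray:
          """Counters -> hidden layer -> [clock_scale, voltage_scale] in (0, 1)."""
          h = np.maximum(perf_counters @ W1 + b1, 0.0)       # ReLU hidden layer
          return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))        # sigmoid outputs

      if __name__ == "__main__":
          counters = np.array([0.72, 0.10, 0.55, 0.31])  # e.g. occupancy, stalls, ...
          print(propose_operating_params(counters))      # scales a control unit applies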
  • Publication number: 20220398291
    Abstract: Methods, systems, apparatuses, and computer-readable storage mediums described herein are directed to techniques for smart browser history searching. For example, a user may submit natural language-based search queries to a browser application, which searches for various textual features of web pages maintained by a browser's history, as well as various entity object types included on such web pages based on the search queries. The entity object types include various content included on the web pages, including, but not limited to, products, images, and videos. The browser application also searches for textual features and/or entity object types having a semantic similarity to the search terms of the search queries, thereby providing an advanced search that aims to locate web pages not only based on exact keywords, but also based on the intent and contextual significance of the search terms specified by the user.
    Type: Application
    Filed: November 18, 2021
    Publication date: December 15, 2022
    Inventors: Tulasi MENON, Laalithya BODDAPATI, Parinishtha YADAV, Prasenjit MUKHERJEE, Siddharth SHARMA
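    A minimal, dependency-free Python sketch of the history search described above, using bag-of-words cosine similarity as a stand-in for the semantic similarity the abstract implies. The history entry fields and scoring are assumptions, not the browser's implementation.

      # Illustrative only: rank browser-history entries by similarity between
      # the query and each page's text, rather than exact keyword matching.
      from collections import Counter
      from math import sqrt

      HISTORY = [
          {"url": "https://example.com/shoes", "text": "running shoes product page"},
          {"url": "https://example.com/video", "text": "video review of trail shoes"},
      ]

      def cosine(a: Counter, b: Counter) -> float:
          dot = sum(a[t] * b[t] for t in a)
          norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
          return dot / norm if norm else 0.0

      def search_history(query: str, top_k: int = 1) -> list:
          """Return the history entries most similar to the natural-language query."""
          q = Counter(query.lower().split())
          scored = [(cosine(q, Counter(e["text"].lower().split())), e) for e in HISTORY]
          return [e for s, e in sorted(scored, key=lambda p: p[0], reverse=True)[:top_k]]

      if __name__ == "__main__":
          print(search_history("shoes I looked at"))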
  • Patent number: 11481950
    Abstract: Graphics processing unit (GPU) performance and power efficiency are improved using machine learning to tune operating parameters based on performance monitor values and application information. Performance monitor values are processed using machine learning techniques to generate model parameters, which are used by a control unit within the GPU to provide real-time updates to the operating parameters. In one embodiment, a neural network processes the performance monitor values to generate operating parameters in real-time.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: October 25, 2022
    Assignee: NVIDIA Corporation
    Inventors: Rouslan L. Dimitrov, Dale L. Kirkland, Emmett M. Kilgariff, Sachin Satish Idgunji, Siddharth Sharma
  • Publication number: 20220121488
    Abstract: Data processing approaches are disclosed that include receiving a configuration indicating a plurality of parameters for performing a data processing job, identifying available compute resources from a plurality of public cloud infrastructures, where each public cloud infrastructure of the plurality of public cloud infrastructures supports one or more computing applications, one or more job schedulers, and one or more utilization rates, selecting one or more compute clusters from one or more of the plurality of public cloud infrastructures based on a matching process between the parameters for performing the data processing job and a combination of the one or more computing applications, the one or more job schedulers, and the one or more utilization rates, and initiating the one or more compute clusters for processing the data processing job based on the selecting.
    Type: Application
    Filed: January 29, 2021
    Publication date: April 21, 2022
    Inventors: Amit Martu Kamat, Siddharth Sharma, Raveendrnathan Loganathan, Anil Raju Puliyeril, Kenneth Siu
  • Publication number: 20210174569
    Abstract: Graphics processing unit (GPU) performance and power efficiency are improved using machine learning to tune operating parameters based on performance monitor values and application information. Performance monitor values are processed using machine learning techniques to generate model parameters, which are used by a control unit within the GPU to provide real-time updates to the operating parameters. In one embodiment, a neural network processes the performance monitor values to generate operating parameters in real-time.
    Type: Application
    Filed: January 29, 2021
    Publication date: June 10, 2021
    Inventors: Rouslan L. Dimitrov, Dale L. Kirkland, Emmett M. Kilgariff, Sachin Satish Idgunji, Siddharth Sharma
  • Patent number: 10909738
    Abstract: Graphics processing unit (GPU) performance and power efficiency are improved using machine learning to tune operating parameters based on performance monitor values and application information. Performance monitor values are processed using machine learning techniques to generate model parameters, which are used by a control unit within the GPU to provide real-time updates to the operating parameters. In one embodiment, a neural network processes the performance monitor values to generate operating parameters in real-time.
    Type: Grant
    Filed: January 5, 2018
    Date of Patent: February 2, 2021
    Assignee: NVIDIA Corporation
    Inventors: Rouslan L. Dimitrov, Dale L. Kirkland, Emmett M. Kilgariff, Sachin Satish Idgunji, Siddharth Sharma
  • Patent number: 10809986
    Abstract: A system and a method for the optimization of dynamic code translation are disclosed. A cloud-based front-end application receives an input module, a rule specification, and a translation definition. The cloud-based front-end application transmits the input module, rule specification, and translation definition to a back-end processing module. The back-end processing module parses the three inputs and stores them in separate data structures. The back-end processing module performs a non-executing analysis of the translation definition based on the rule specification, generating a set of defects. The back-end processing module performs an execution of the translation definition with the input module, generating a report of system metrics. The set of defects and the system metrics are transmitted back to a GUI running on the cloud-based front-end application.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: October 20, 2020
    Assignee: Walmart Apollo, LLC
    Inventors: Madhavan Kalkunte Ramachandra, Siddharth Sharma
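    A condensed Python sketch of the back-end flow in the abstract above: a non-executing analysis of a translation definition against a rule specification yields defects, and an execution pass over the input module yields system metrics, both of which would be returned to the front-end GUI. The rule format, mapping shape, and metric names are assumptions, not the patented design.

      # Hypothetical sketch: static rule check, then execution with metrics.
      import time

      def analyze(translation: dict, rules: dict) -> list:
          """Non-executing analysis: report every rule the translation violates."""
          defects = []
          for field, required in rules.items():
              if required and field not in translation.get("mappings", {}):
                  defects.append(f"missing mapping for required field '{field}'")
          return defects

      def execute(translation: dict, input_module: list) -> dict:
          """Run the translation over the input module and report simple metrics."""
          start = time.perf_counter()
          output = [{translation["mappings"].get(k, k): v for k, v in rec.items()}
                    for rec in input_module]
          return {"records": len(output), "seconds": time.perf_counter() - start}

      if __name__ == "__main__":
          translation = {"mappings": {"sku": "item_id"}}
          rules = {"sku": True, "price": True}
          print(analyze(translation, rules))                     # defects -> front-end GUI
          print(execute(translation, [{"sku": 1, "price": 9}]))  # metrics -> front-end GUI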
  • Publication number: 20190317745
    Abstract: A system and a method for the optimization of dynamic code translation are disclosed. A cloud-based front-end application receives an input module, a rule specification, and a translation definition. The cloud-based front-end application transmits the input module, rule specification, and translation definition to a back-end processing module. The back-end processing module parses the three inputs and stores them in separate data structures. The back-end processing module performs a non-executing analysis of the translation definition based on the rule specification, generating a set of defects. The back-end processing module performs an execution of the translation definition with the input module, generating a report of system metrics. The set of defects and the system metrics are transmitted back to a GUI running on the cloud-based front-end application.
    Type: Application
    Filed: June 4, 2018
    Publication date: October 17, 2019
    Inventors: Kr Madhavan, Siddharth Sharma
  • Publication number: 20190213775
    Abstract: Graphics processing unit (GPU) performance and power efficiency are improved using machine learning to tune operating parameters based on performance monitor values and application information. Performance monitor values are processed using machine learning techniques to generate model parameters, which are used by a control unit within the GPU to provide real-time updates to the operating parameters. In one embodiment, a neural network processes the performance monitor values to generate operating parameters in real-time.
    Type: Application
    Filed: January 5, 2018
    Publication date: July 11, 2019
    Inventors: Rouslan L. Dimitrov, Dale L. Kirkland, Emmett M. Kilgariff, Sachin Satish Idgunji, Siddharth Sharma
  • Patent number: 10339039
    Abstract: A virtualization request is identified to initiate a virtualized transaction involving a first software component and a virtual service simulating a second software component. A reference within the first software component to the second software component is determined, using a plugin installed on the first software component, that is to be used by the first software component to determine a first network location of the second software component. A second network location of a system to host the virtual service is determined and the reference is changed, using the plugin, to direct communications of the first software component to the second network location instead of the first network location responsive to the virtualization request.
    Type: Grant
    Filed: January 25, 2017
    Date of Patent: July 2, 2019
    Assignee: CA, Inc.
    Inventors: Rajesh M. Raheja, Dhruv Mevada, Siddharth Sharma, Stephy Nancy Francis Xavier
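    A minimal sketch of the redirection step described above: a plugin installed on the first software component swaps its reference to the real dependency for the network location of the virtual service, and can restore it afterwards. Class and attribute names are illustrative, not CA's actual API.

      # Hypothetical sketch: redirect a component's dependency reference to a
      # virtual service for the duration of a virtualized transaction.
      class Component:
          def __init__(self, name: str, dependency_url: str):
              self.name = name
              self.dependency_url = dependency_url   # where requests currently go

      class VirtualizationPlugin:
          """Installed on the first component; rewrites its dependency reference."""
          def __init__(self, component: Component):
              self.component = component
              self._original_url = component.dependency_url

          def redirect(self, virtual_service_url: str) -> None:
              self.component.dependency_url = virtual_service_url

          def restore(self) -> None:
              self.component.dependency_url = self._original_url

      if __name__ == "__main__":
          svc = Component("orders", "https://payments.internal:443")
          plugin = VirtualizationPlugin(svc)
          plugin.redirect("https://virtual-service.test:8080")  # virtualized transaction
          print(svc.dependency_url)
          plugin.restore()                                      # back to the real system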
  • Publication number: 20180210745
    Abstract: A virtualization request is identified to initiate a virtualized transaction involving a first software component and a virtual service simulating a second software component. A reference within the first software component to the second software component is determined, using a plugin installed on the first software component, that is to be used by the first software component to determine a first network location of the second software component. A second network location of a system to host the virtual service is determined and the reference is changed, using the plugin, to direct communications of the first software component to the second network location instead of the first network location responsive to the virtualization request.
    Type: Application
    Filed: January 25, 2017
    Publication date: July 26, 2018
    Inventors: Rajesh M. Raheja, Dhruv Mevada, Siddharth Sharma, Stephy Nancy Francis Xavier
  • Publication number: 20130226944
    Abstract: Data transformation can be performed across various data structures and formats. Moreover, data transformation can be format agnostic. Output data of a second structure can be generated as a function of input data of a first structure and a transform independent of the format of input and output data. In one instance, the transform can be specified by way of a graphical representation and encoded in a form independent of input and output data formats. Subsequently, data transformation can be performed as a function of the transform and input data.
    Type: Application
    Filed: February 24, 2012
    Publication date: August 29, 2013
    Applicant: Microsoft Corporation
    Inventors: Sushil Baid, Kranthi K. Mannem, Palavalli R. Sharath, Anil K. Prasad, Siddharth Sharma, Krishnan Srinivasan
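    A small Python illustration of the format-agnostic idea above: the transform is encoded as plain field mappings with no knowledge of the wire format, and thin adapters parse JSON or CSV input before the same transform is applied. The transform encoding and adapter functions are assumptions, not the patented representation.

      # Illustrative only: one transform, applied identically to data parsed
      # from two different input formats.
      import csv, io, json

      TRANSFORM = [("customer_name", "name"), ("customer_id", "id")]  # output <- input

      def parse(data: str, fmt: str) -> list:
          """Format adapter: turn raw JSON or CSV text into plain records."""
          if fmt == "json":
              return json.loads(data)
          return list(csv.DictReader(io.StringIO(data)))

      def apply_transform(records: list) -> list:
          """Apply the format-independent transform to already-parsed records."""
          return [{dst: rec.get(src) for dst, src in TRANSFORM} for rec in records]

      if __name__ == "__main__":
          as_json = '[{"name": "Ada", "id": "7"}]'
          as_csv = "name,id\nAda,7\n"
          print(apply_transform(parse(as_json, "json")))  # same output either way
          print(apply_transform(parse(as_csv, "csv")))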