Patents by Inventor Siddharth Sharma

Siddharth Sharma has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250061186
    Abstract: A method for data processing by a data clean room orchestration system is described. The method includes receiving an indication of mutually attested code for a data clean room between two or more partners. The method further includes configuring a trusted execution environment (TEE), including one or more virtual machines (VMs) that are individually or collectively operable to execute the mutually attested code. The method further includes transmitting, to endpoints associated with the partners, an attestation report including at least an encrypted token and a host public key of a host machine associated with the one or more VMs. The method further includes receiving respective partner secret keys wrapped with the host public key. The method further includes executing the mutually attested code on respective partner datasets in the TEE based on using a host private key of the host machine to unwrap the respective partner secret keys.
    Type: Application
    Filed: August 15, 2023
    Publication date: February 20, 2025
    Inventors: Siddharth Sharma, Roopak Gupta, Chetan Urkudkar, Prashanth Jonnalagadda
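The abstract for 20250061186 above hinges on each partner wrapping its secret key with the TEE host's public key so that only code inside the TEE can unwrap it. Below is a minimal sketch of that wrap/unwrap step, assuming RSA-OAEP via the `cryptography` package; the patent does not specify an algorithm, and the key names here are illustrative, not the patented implementation.

```python
# Illustrative sketch only: RSA-OAEP key wrapping, standing in for the
# "wrap partner secret key with host public key" step in the abstract.
# The patent does not name an algorithm; this choice is an assumption.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
import os

# Host key pair; in the described system the private key stays inside the TEE.
host_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
host_public_key = host_private_key.public_key()

# A partner's symmetric data key (e.g., protecting its dataset), generated partner-side.
partner_secret_key = os.urandom(32)

# Partner wraps its key with the host public key taken from the attestation report.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = host_public_key.encrypt(partner_secret_key, oaep)

# Inside the TEE, the host private key unwraps it before the attested code runs.
unwrapped_key = host_private_key.decrypt(wrapped_key, oaep)
assert unwrapped_key == partner_secret_key
```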
  • Publication number: 20240311505
    Abstract: Methods, systems, and devices for data processing are described. A process orchestration layer of a data processing system may obtain an indication of code that has been approved by two or more parties of a secured sharing session. The process orchestration layer may generate a stored procedure that includes an initialization function, an output function, and a run function with the approved code. The process orchestration layer may output, to a first sub-system associated with a first party of the secured sharing session, a request that causes the first sub-system to execute the stored procedure. The process orchestration layer may receive an indication of an encrypted session token from the first sub-system in accordance with the initialization function of the stored procedure. The process orchestration layer may validate the encrypted session token and provide the validated session token to other parties of the secured sharing session.
    Type: Application
    Filed: March 12, 2024
    Publication date: September 19, 2024
    Inventors: Anil Raju Puliyeril, Roopak Gupta, Matthew Karasick, Siddharth Sharma
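Publication 20240311505 above describes generating a stored procedure that bundles an initialization function, a run function wrapping the approved code, and an output function, with a session token that the orchestration layer validates. A minimal Python sketch of that three-part shape follows; a plain in-process object stands in for a real database stored procedure, an HMAC-signed token stands in for the encrypted session token, and all names are hypothetical.

```python
# Illustrative sketch: the initialize -> run approved code -> output structure
# described in the abstract. Names and the token scheme are assumptions.
import hmac, hashlib, os, secrets

ORCHESTRATOR_KEY = os.urandom(32)  # stands in for the orchestration layer's secret

def make_session_token(session_id: str) -> str:
    """Initialization step: mint a token the orchestration layer can validate."""
    mac = hmac.new(ORCHESTRATOR_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}.{mac}"

def validate_session_token(token: str) -> bool:
    """Orchestration-layer check before the token is shared with other parties."""
    session_id, mac = token.rsplit(".", 1)
    expected = hmac.new(ORCHESTRATOR_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)

class StoredProcedure:
    """Bundles initialization, the approved code, and output, as in the abstract."""
    def __init__(self, approved_code):
        self.approved_code = approved_code
        self.result = None

    def initialize(self) -> str:
        return make_session_token(secrets.token_hex(8))

    def run(self, dataset):
        self.result = self.approved_code(dataset)

    def output(self):
        return self.result

# Usage sketch: the first party's sub-system executes the procedure.
proc = StoredProcedure(approved_code=lambda rows: sum(rows))
token = proc.initialize()
assert validate_session_token(token)
proc.run([1, 2, 3])
print(proc.output())  # 6
```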
  • Publication number: 20240291836
    Abstract: Methods, systems, and devices for data management are described. A system supporting malware detection may obtain event data such as risk scores corresponding to events associated with a set of computing entities. Using the event data, the system may construct a graph that includes nodes that represent the set of computing entities, and edges that represent the events, where the edges are between initiator and affected nodes and are associated with the respective event risk scores. Using the graph, respective node risk scores may be calculated for at least some nodes of the graph, and one or more anomalous nodes may be identified based on the one or more anomalous nodes having respective node risk scores that satisfy a threshold. The system may then output an indication of one or more computing entities corresponding to the one or more anomalous nodes.
    Type: Application
    Filed: February 24, 2023
    Publication date: August 29, 2024
    Inventors: Rohit Agrawal, Mudit Malpani, Anshul Gupta, Gaurav Maheshwari, Siddharth Sharma, Tyler Vu
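Publication 20240291836 above builds a graph whose nodes are computing entities and whose edges carry per-event risk scores, then flags nodes whose aggregated risk crosses a threshold. A minimal sketch with `networkx` follows, assuming the node score is simply the sum of risk on incident edges; the aggregation rule, threshold, and event data are made up for illustration.

```python
# Illustrative sketch: aggregate per-event risk scores onto the initiating and
# affected entities, then flag nodes whose total crosses a threshold.
import networkx as nx

# Each event: (initiator entity, affected entity, risk score for that event).
events = [
    ("laptop-17", "fileserver-2", 0.2),
    ("laptop-17", "dc-1", 0.9),
    ("laptop-17", "dc-1", 0.8),
    ("laptop-03", "fileserver-2", 0.1),
]

graph = nx.MultiDiGraph()
for initiator, affected, risk in events:
    graph.add_edge(initiator, affected, risk=risk)

# Node risk score: sum of risk on outgoing and incoming edges (assumed rule).
node_risk = {
    node: sum(d["risk"] for _, _, d in graph.edges(node, data=True))
    + sum(d["risk"] for _, _, d in graph.in_edges(node, data=True))
    for node in graph.nodes
}

THRESHOLD = 1.5  # made-up threshold for illustration
anomalous = [node for node, score in node_risk.items() if score >= THRESHOLD]
print(anomalous)  # ['laptop-17', 'dc-1']
```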
  • Patent number: 12045924
    Abstract: Graphics processing unit (GPU) performance and power efficiency is improved using machine learning to tune operating parameters based on performance monitor values and application information. Performance monitor values are processed using machine learning techniques to generate model parameters, which are used by a control unit within the GPU to provide real-time updates to the operating parameters. In one embodiment, a neural network processes the performance monitor values to generate operating parameters in real-time.
    Type: Grant
    Filed: September 15, 2022
    Date of Patent: July 23, 2024
    Assignee: NVIDIA Corporation
    Inventors: Rouslan L. Dimitrov, Dale L. Kirkland, Emmett M. Kilgariff, Sachin Satish Idgunji, Siddharth Sharma
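Patent 12045924 above (and the related publications later in this listing) describes feeding GPU performance-monitor values through a learned model to produce updated operating parameters in real time. The sketch below shows only the inference step with a tiny PyTorch MLP; the counter set, parameter set, and network shape are assumptions and do not reflect the patented design.

```python
# Illustrative sketch: map performance-monitor readings to operating-parameter
# updates with a small neural network, as the abstract describes at a high level.
import torch
import torch.nn as nn

# Assumed inputs: a few normalized performance counters (e.g., SM occupancy,
# memory bandwidth utilization, frame time) plus an application-type feature.
NUM_COUNTERS = 4
# Assumed outputs: normalized operating parameters (e.g., core clock scale,
# memory clock scale, voltage scale).
NUM_PARAMS = 3

model = nn.Sequential(
    nn.Linear(NUM_COUNTERS, 16),
    nn.ReLU(),
    nn.Linear(16, NUM_PARAMS),
    nn.Sigmoid(),  # keep outputs in [0, 1] so they map into safe operating ranges
)

# One inference step: in the described system this would run repeatedly,
# with a control unit inside the GPU applying the outputs.
counters = torch.tensor([[0.72, 0.55, 0.016, 1.0]])  # made-up readings
with torch.no_grad():
    params = model(counters)
print(params)
```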
  • Publication number: 20240062003
    Abstract: Various embodiments of the present invention provide methods, apparatus, systems, computing devices, computing entities, and/or the like for performing natural language processing operations by generating semantic table representations for table data objects using a token-wise entity type classification mechanism whose output space is defined by a set of defined entity types characterized by an inter-related entity type taxonomy to generate a representation of a table data object that describes per-token semantic inferences and cross-token semantic inferences performed on the table data object in accordance with subject-matter-domain insights as described by the inter-related entity type taxonomy.
    Type: Application
    Filed: August 22, 2022
    Publication date: February 22, 2024
    Inventors: Mrityunjai Singh, Aviral Sharma, Jatin Lamba, Shreyansh S. Nanawati, Deeksha Thareja, Siddharth Sharma
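Publication 20240062003 above describes classifying each token of a table against a taxonomy of entity types to build a semantic representation of the table. The sketch below illustrates the per-token labeling pass and a roll-up through the taxonomy, assuming a trivial rule-based classifier in place of the trained model and a tiny made-up healthcare-flavored taxonomy.

```python
# Illustrative sketch: label each cell token with an entity type drawn from a
# small, made-up taxonomy, then keep the per-token labels as the table's
# semantic representation. A real system would use a trained classifier.
import re

# Tiny illustrative taxonomy: parent type -> child types (an inter-related hierarchy).
TAXONOMY = {
    "identifier": ["member_id", "claim_id"],
    "clinical": ["diagnosis_code", "procedure_code"],
    "amount": ["billed_amount"],
}

def classify_token(token: str, column_name: str) -> str:
    """Assumed stand-in for the token-wise entity type classifier."""
    if re.fullmatch(r"[A-Z]\d{2}(\.\d+)?", token):
        return "diagnosis_code"          # ICD-like pattern
    if re.fullmatch(r"\d+\.\d{2}", token):
        return "billed_amount"
    if re.fullmatch(r"\d{6,}", token):
        return "member_id" if "member" in column_name else "claim_id"
    return "other"

table = {
    "member": ["1048221", "2093311"],
    "dx": ["E11.9", "I10"],
    "billed": ["125.00", "980.50"],
}

# Per-token semantic inferences.
semantic_representation = {
    col: [classify_token(tok, col) for tok in cells]
    for col, cells in table.items()
}

# Roll fine-grained labels up to their parent type via the taxonomy.
PARENT = {child: parent for parent, children in TAXONOMY.items() for child in children}
column_types = {
    col: {PARENT.get(lbl, lbl) for lbl in labels}
    for col, labels in semantic_representation.items()
}
print(semantic_representation)
print(column_types)
```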
  • Patent number: 11886912
    Abstract: Data processing approaches are disclosed that include receiving a configuration indicating a plurality of parameters for performing a data processing job, identifying available compute resources from a plurality of public cloud infrastructures, where each public cloud infrastructure of the plurality of public cloud infrastructures supports one or more computing applications, one or more job schedulers, and one or more utilization rates, selecting one or more compute clusters from one or more of the plurality of public cloud infrastructures based on a matching process between the parameters for performing the data processing job and a combination of the one or more computing applications, the one or more job schedulers, and the one or more utilization rates, and initiating the one or more compute clusters for processing the data processing job based on the selecting.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: January 30, 2024
    Assignee: Salesforce Inc.
    Inventors: Amit Martu Kamat, Siddharth Sharma, Raveendrnathan Loganathan, Anil Raju Puliyeril, Kenneth Siu
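Patent 11886912 above describes matching a job's configured parameters against the applications, job schedulers, and utilization rates offered by clusters across several public clouds, then initiating the best match. Here is a minimal sketch of one such matching pass; the scoring rule, field names, and cluster data are assumptions for illustration.

```python
# Illustrative sketch: pick compute clusters whose supported applications and
# schedulers cover the job's needs, preferring lower current utilization.
job = {
    "applications": {"spark"},
    "scheduler": "yarn",
    "max_utilization": 0.7,   # made-up parameter names
}

clusters = [
    {"name": "aws-east-1",   "applications": {"spark", "presto"}, "schedulers": {"yarn"},       "utilization": 0.55},
    {"name": "gcp-west-2",   "applications": {"spark"},           "schedulers": {"kubernetes"}, "utilization": 0.20},
    {"name": "azure-east-3", "applications": {"spark", "flink"},  "schedulers": {"yarn"},       "utilization": 0.85},
]

def matches(cluster, job):
    return (job["applications"] <= cluster["applications"]
            and job["scheduler"] in cluster["schedulers"]
            and cluster["utilization"] <= job["max_utilization"])

candidates = sorted((c for c in clusters if matches(c, job)),
                    key=lambda c: c["utilization"])
selected = candidates[0] if candidates else None
print(selected["name"] if selected else "no match")  # aws-east-1
# The described system would then initiate the selected cluster(s) for the job.
```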
  • Publication number: 20230297485
    Abstract: Various embodiments include a system for generating performance monitoring data in a computing system. The system includes a unit level counter with a set of counters, where each counter increments during each clock cycle in which a corresponding electronic signal is at a first state, such as a high or low logic level state. Periodically, the unit level counter transmits the counter values to a corresponding counter collection unit. The counter collection unit includes a set of counters that aggregates the values of the counters in multiple unit level counters. Based on certain trigger conditions, the counter collection unit transmits records to a reduction channel. The reduction channel includes a set of counters that aggregates the values of the counters in multiple counter collection units. Each virtual machine executing on the system can access a different corresponding reduction channel, providing secure performance metric data for each virtual machine.
    Type: Application
    Filed: March 18, 2022
    Publication date: September 21, 2023
    Inventors: Pranav VAIDYA, Alan MENEZES, Siddharth SHARMA, Jin OUYANG, Gregory Paul SMITH, Timothy J. MCDONALD, Shounak KAMALAPURKAR, Abhijat RANADE, Thomas Melvin OGLETREE
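Publication 20230297485 above describes unit-level counters that increment while a signal is asserted, periodically feed counter collection units, which in turn feed per-virtual-machine reduction channels. The sketch below models that three-level aggregation in plain Python; class names, flush periods, and trigger conditions are assumptions, and the hardware signalling is reduced to method calls.

```python
# Illustrative sketch: three-level counter aggregation (unit-level counter ->
# counter collection unit -> reduction channel), modeled in software.
class UnitLevelCounter:
    """Increments a counter each cycle its signal is high; flushes periodically."""
    def __init__(self, num_signals, collection_unit, flush_every=4):
        self.counts = [0] * num_signals
        self.collection_unit = collection_unit
        self.flush_every = flush_every
        self.cycles = 0

    def clock(self, signal_levels):
        for i, high in enumerate(signal_levels):
            if high:
                self.counts[i] += 1
        self.cycles += 1
        if self.cycles % self.flush_every == 0:
            self.collection_unit.accumulate(self.counts)
            self.counts = [0] * len(self.counts)

class CounterCollectionUnit:
    """Aggregates several unit-level counters; forwards records on a trigger."""
    def __init__(self, num_signals, reduction_channel, record_every=2):
        self.totals = [0] * num_signals
        self.reduction_channel = reduction_channel
        self.record_every = record_every
        self.flushes = 0

    def accumulate(self, counts):
        self.totals = [t + c for t, c in zip(self.totals, counts)]
        self.flushes += 1
        if self.flushes % self.record_every == 0:   # assumed trigger condition
            self.reduction_channel.record(self.totals)
            self.totals = [0] * len(self.totals)

class ReductionChannel:
    """Per-VM aggregation endpoint, so each VM only sees its own metrics."""
    def __init__(self, num_signals):
        self.totals = [0] * num_signals

    def record(self, totals):
        self.totals = [t + r for t, r in zip(self.totals, totals)]

# Usage sketch: one VM's channel fed by one collection unit and one counter.
channel = ReductionChannel(num_signals=2)
ccu = CounterCollectionUnit(num_signals=2, reduction_channel=channel)
ulc = UnitLevelCounter(num_signals=2, collection_unit=ccu)
for cycle in range(8):
    ulc.clock([cycle % 2 == 0, True])   # signal 0 high half the time, signal 1 always
print(channel.totals)  # [4, 8]
```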
  • Publication number: 20230007920
    Abstract: Graphics processing unit (GPU) performance and power efficiency is improved using machine learning to tune operating parameters based on performance monitor values and application information. Performance monitor values are processed using machine learning techniques to generate model parameters, which are used by a control unit within the GPU to provide real-time updates to the operating parameters. In one embodiment, a neural network processes the performance monitor values to generate operating parameters in real-time.
    Type: Application
    Filed: September 15, 2022
    Publication date: January 12, 2023
    Inventors: Rouslan L. Dimitrov, Dale L. Kirkland, Emmett M. Kilgariff, Sachin Satish Idgunji, Siddharth Sharma
  • Publication number: 20220398291
    Abstract: Methods, systems, apparatuses, and computer-readable storage mediums described herein are directed to techniques for smart browser history searching. For example, a user may submit natural language-based search queries to a browser application, which searches for various textual features of web pages maintained by a browser's history, as well as various entity object types included on such web pages based on the search queries. The entity object types include various content included on the web pages, including, but not limited to, products, images, and videos. The browser application also searches for textual features and/or entity object types having a semantic similarity to the search terms of the search queries, thereby providing an advanced search that not only aims to locate web pages based on exact keywords, but also based on the intent and contextual significance of the search terms specified by the user.
    Type: Application
    Filed: November 18, 2021
    Publication date: December 15, 2022
    Inventors: Tulasi MENON, Laalithya BODDAPATI, Parinishtha YADAV, Prasenjit MUKHERJEE, Siddharth SHARMA
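Publication 20220398291 above describes searching browser history not only by exact keywords but by semantic similarity between the query and each page's textual features and entity objects. The sketch below shows the ranking step; a bag-of-words cosine similarity stands in for the learned semantic matching the abstract describes, and the history data is made up.

```python
# Illustrative sketch: rank history entries by similarity to a natural-language
# query. A real system would use learned embeddings of the pages' textual
# features and entity objects rather than raw token counts.
import math
from collections import Counter

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

history = [
    {"url": "https://example.com/reviews", "features": "running shoes product review video"},
    {"url": "https://example.com/recipes", "features": "pasta recipe cooking images"},
    {"url": "https://example.com/shop",    "features": "buy trail running shoes product images"},
]

query = "that page with shoes I was shopping for"
qv = vectorize(query)
ranked = sorted(history,
                key=lambda page: cosine(qv, vectorize(page["features"])),
                reverse=True)
for page in ranked:
    print(page["url"], round(cosine(qv, vectorize(page["features"])), 3))
```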
  • Patent number: 11481950
    Abstract: Graphics processing unit (GPU) performance and power efficiency is improved using machine learning to tune operating parameters based on performance monitor values and application information. Performance monitor values are processed using machine learning techniques to generate model parameters, which are used by a control unit within the GPU to provide real-time updates to the operating parameters. In one embodiment, a neural network processes the performance monitor values to generate operating parameters in real-time.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: October 25, 2022
    Assignee: NVIDIA Corporation
    Inventors: Rouslan L. Dimitrov, Dale L. Kirkland, Emmett M. Kilgariff, Sachin Satish Idgunji, Siddharth Sharma
  • Publication number: 20220121488
    Abstract: Data processing approaches are disclosed that include receiving a configuration indicating a plurality of parameters for performing a data processing job, identifying available compute resources from a plurality of public cloud infrastructures, where each public cloud infrastructure of the plurality of public cloud infrastructures supports one or more computing applications, one or more job schedulers, and one or more utilization rates, selecting one or more compute clusters from one or more of the plurality of public cloud infrastructures based on a matching process between the parameters for performing the data processing job and a combination of the one or more computing applications, the one or more job schedulers, and the one or more utilization rates, and initiating the one or more compute clusters for processing the data processing job based on the selecting.
    Type: Application
    Filed: January 29, 2021
    Publication date: April 21, 2022
    Inventors: Amit Martu Kamat, Siddharth Sharma, Raveendrnathan Loganathan, Anil Raju Puliyeril, Kenneth Siu
  • Publication number: 20210174569
    Abstract: Graphics processing unit (GPU) performance and power efficiency is improved using machine learning to tune operating parameters based on performance monitor values and application information. Performance monitor values are processed using machine learning techniques to generate model parameters, which are used by a control unit within the GPU to provide real-time updates to the operating parameters. In one embodiment, a neural network processes the performance monitor values to generate operating parameters in real-time.
    Type: Application
    Filed: January 29, 2021
    Publication date: June 10, 2021
    Inventors: Rouslan L. Dimitrov, Dale L. Kirkland, Emmett M. Kilgariff, Sachin Satish Idgunji, Siddharth Sharma
  • Patent number: 10909738
    Abstract: Graphics processing unit (GPU) performance and power efficiency is improved using machine learning to tune operating parameters based on performance monitor values and application information. Performance monitor values are processed using machine learning techniques to generate model parameters, which are used by a control unit within the GPU to provide real-time updates to the operating parameters. In one embodiment, a neural network processes the performance monitor values to generate operating parameters in real-time.
    Type: Grant
    Filed: January 5, 2018
    Date of Patent: February 2, 2021
    Assignee: NVIDIA Corporation
    Inventors: Rouslan L. Dimitrov, Dale L. Kirkland, Emmett M. Kilgariff, Sachin Satish Idgunji, Siddharth Sharma
  • Patent number: 10809986
Abstract: A system and a method for the optimization of dynamic code translation are disclosed. A cloud-based front-end application receives an input module, rule specification and a translation definition. The cloud-based front-end application transmits the input module, rule specification and translation definition to a back-end processing module. The back-end processing module parses the three inputs and stores them in separate data structures. The back-end processing module performs a non-executing analysis of the translation definition based on the rule specification, generating a set of defects. The back-end processing module performs an execution of the translation definition with the input module, generating a report of system metrics. The set of defects and the system metrics are transmitted back to a GUI running on the cloud-based front-end application.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: October 20, 2020
    Assignee: Walmart Apollo, LLC
    Inventors: Madhavan Kalkunte Ramachandra, Siddharth Sharma
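Patent 10809986 above describes a back end that parses an input module, a rule specification, and a translation definition, runs a non-executing (static) check of the definition against the rules, then executes the definition and reports system metrics. The sketch below illustrates those two passes; the rule format, field names, and metrics are invented for illustration.

```python
# Illustrative sketch: a non-executing check of a translation definition against
# a rule specification (producing defects), followed by an execution pass that
# collects simple metrics. All formats here are made up.
import time

rule_specification = {
    "required_fields": ["source", "target", "mapping"],
    "allowed_targets": ["json", "xml"],
}

translation_definition = {
    "source": "csv",
    "target": "yaml",                      # violates allowed_targets
    "mapping": {"name": "full_name", "dob": "date_of_birth"},
}

def static_analysis(definition, rules):
    """Non-executing analysis: returns a set of defects, as in the abstract."""
    defects = []
    for field in rules["required_fields"]:
        if field not in definition:
            defects.append(f"missing required field: {field}")
    if definition.get("target") not in rules["allowed_targets"]:
        defects.append(f"unsupported target format: {definition.get('target')}")
    return defects

def execute_translation(definition, input_module):
    """Execution pass: apply the mapping and collect simple system metrics."""
    start = time.perf_counter()
    output = [{definition["mapping"].get(k, k): v for k, v in row.items()}
              for row in input_module]
    elapsed = time.perf_counter() - start
    return output, {"rows": len(output), "seconds": elapsed}

input_module = [{"name": "Ada", "dob": "1815-12-10"}]
defects = static_analysis(translation_definition, rule_specification)
output, metrics = execute_translation(translation_definition, input_module)
print(defects)   # ['unsupported target format: yaml']
print(metrics)   # e.g. {'rows': 1, 'seconds': ...}
```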
  • Publication number: 20190317745
Abstract: A system and a method for the optimization of dynamic code translation are disclosed. A cloud-based front-end application receives an input module, rule specification and a translation definition. The cloud-based front-end application transmits the input module, rule specification and translation definition to a back-end processing module. The back-end processing module parses the three inputs and stores them in separate data structures. The back-end processing module performs a non-executing analysis of the translation definition based on the rule specification, generating a set of defects. The back-end processing module performs an execution of the translation definition with the input module, generating a report of system metrics. The set of defects and the system metrics are transmitted back to a GUI running on the cloud-based front-end application.
    Type: Application
    Filed: June 4, 2018
    Publication date: October 17, 2019
    Inventors: Kr Madhavan, Siddharth Sharma
  • Publication number: 20190213775
    Abstract: Graphics processing unit (GPU) performance and power efficiency is improved using machine learning to tune operating parameters based on performance monitor values and application information. Performance monitor values are processed using machine learning techniques to generate model parameters, which are used by a control unit within the GPU to provide real-time updates to the operating parameters. In one embodiment, a neural network processes the performance monitor values to generate operating parameters in real-time.
    Type: Application
    Filed: January 5, 2018
    Publication date: July 11, 2019
    Inventors: Rouslan L. Dimitrov, Dale L. Kirkland, Emmett M. Kilgariff, Sachin Satish Idgunji, Siddharth Sharma
  • Patent number: 10339039
    Abstract: A virtualization request is identified to initiate a virtualized transaction involving a first software component and a virtual service simulating a second software component. A reference within the first software component to the second software component is determined, using a plugin installed on the first software component, that is to be used by the first software component to determine a first network location of the second software component. A second network location of a system to host the virtual service is determined and the reference is changed, using the plugin, to direct communications of the first software component to the second network location instead of the first network location responsive to the virtualization request.
    Type: Grant
    Filed: January 25, 2017
    Date of Patent: July 2, 2019
    Assignee: CA, Inc.
    Inventors: Rajesh M. Raheja, Dhruv Mevada, Siddharth Sharma, Stephy Nancy Francis Xavier
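Patent 10339039 above describes a plugin installed on the first software component that, on a virtualization request, rewrites the component's reference to the second component so its calls go to the host of the virtual service instead of the real endpoint. The sketch below shows that reference swap; the class and attribute names are hypothetical, not the patented plugin API.

```python
# Illustrative sketch: a plugin swaps the endpoint a component resolves for its
# dependency, so traffic goes to a virtual service during a virtualized
# transaction. Names are hypothetical.
class ComponentA:
    """First software component; holds the reference used to locate component B."""
    def __init__(self):
        self.endpoints = {"component_b": "https://component-b.internal:8443"}

    def call_component_b(self):
        # A real component would send a request here; we just report the target.
        return f"sending request to {self.endpoints['component_b']}"

class VirtualizationPlugin:
    """Installed on component A; redirects a reference for a virtualized transaction."""
    def __init__(self, component):
        self.component = component
        self.saved = {}

    def virtualize(self, dependency, virtual_service_location):
        self.saved[dependency] = self.component.endpoints[dependency]
        self.component.endpoints[dependency] = virtual_service_location

    def restore(self, dependency):
        self.component.endpoints[dependency] = self.saved.pop(dependency)

component_a = ComponentA()
plugin = VirtualizationPlugin(component_a)

print(component_a.call_component_b())            # real location
plugin.virtualize("component_b", "https://virtual-service-host:9090")
print(component_a.call_component_b())            # virtual service location
plugin.restore("component_b")
print(component_a.call_component_b())            # back to the real location
```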
  • Publication number: 20180210745
    Abstract: A virtualization request is identified to initiate a virtualized transaction involving a first software component and a virtual service simulating a second software component. A reference within the first software component to the second software component is determined, using a plugin installed on the first software component, that is to be used by the first software component to determine a first network location of the second software component. A second network location of a system to host the virtual service is determined and the reference is changed, using the plugin, to direct communications of the first software component to the second network location instead of the first network location responsive to the virtualization request.
    Type: Application
    Filed: January 25, 2017
    Publication date: July 26, 2018
    Inventors: Rajesh M. Raheja, Dhruv Mevada, Siddharth Sharma, Stephy Nancy Francis Xavier
  • Publication number: 20130226944
    Abstract: Data transformation can be performed across various data structures and formats. Moreover, data transformation can be format agnostic. Output data of a second structure can be generated as a function of input data of a first structure and a transform independent of the format of input and output data. In one instance, the transform can be specified by way of a graphical representation and encoded in a form independent of input and output data formats. Subsequently, data transformation can be performed as a function of the transform and input data.
    Type: Application
    Filed: February 24, 2012
    Publication date: August 29, 2013
    Applicant: Microsoft Corporation
    Inventors: Sushil Baid, Kranthi K. Mannem, Palavalli R. Sharath, Anil K. Prasad, Siddharth Sharma, Krishnan Srinivasan
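Publication 20130226944 above describes a transform encoded independently of the input and output data formats, so the same transform can turn data of one structure into another regardless of serialization. A minimal sketch follows, assuming JSON input and CSV output as the two concrete formats and a list of field mappings as the format-independent transform; these choices are illustrative only.

```python
# Illustrative sketch: a format-agnostic transform operates on a neutral
# in-memory structure; readers and writers handle the concrete formats.
import csv, io, json

# The transform is encoded independently of any serialization format:
# here, just (input field -> output field) pairs.
transform = [("first", "given_name"), ("last", "family_name")]

def read_json(text):
    return json.loads(text)                    # -> list of dicts (neutral form)

def apply_transform(records, transform):
    """Apply the format-independent mapping to the neutral structure."""
    return [{out: rec.get(src) for src, out in transform} for rec in records]

def write_csv(records):
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=[out for _, out in transform])
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

input_text = '[{"first": "Ada", "last": "Lovelace"}, {"first": "Alan", "last": "Turing"}]'
output_text = write_csv(apply_transform(read_json(input_text), transform))
print(output_text)
# given_name,family_name
# Ada,Lovelace
# Alan,Turing
```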