Patents by Inventor Saurabh Shrivastava

Saurabh Shrivastava has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250071104
    Abstract: A system and a method for a single sign-on (SSO) authentication process that enables digital communication between multiple entities. The system includes a server with one or more processors. These processors receive a first authentication request, corresponding to a first authentication process, from a first entity seeking to access a second entity. Based on the request, the processors generate a second authentication request corresponding to a second authentication process used by the second entity. The second authentication request is then communicated to the second entity. After receiving a first response from the second entity, the processors generate a second response corresponding to that first response. This second response is communicated back to the first entity. Upon validating the second response, the processors allow communication between the first and second entities.
    Type: Application
    Filed: August 24, 2023
    Publication date: February 27, 2025
    Applicant: Lumenore Inc.
    Inventors: Abhishek Kumar, Saurabh Mishra, Rahul Soni, Anurag Shrivastava
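
As a rough illustration of the protocol-bridging flow the abstract above describes, here is a minimal Python sketch of a server translating between two authentication processes. All class names, request fields, and values (AuthBridge, SecondEntity, the dictionaries) are illustrative assumptions and do not come from the patent.

```python
# Minimal sketch of the SSO bridging flow described in the abstract above.
# All names and fields are illustrative assumptions, not taken from the patent.

class SecondEntity:
    """Relying service that only understands its own (second) auth process."""
    def authenticate(self, second_request: dict) -> dict:
        # First response, in the second entity's native format.
        return {"status": "ok", "scheme": second_request["scheme"], "session": "abc123"}

class AuthBridge:
    """Server-side processors that translate between the two auth processes."""
    def handle(self, first_request: dict, second_entity: SecondEntity) -> bool:
        # Generate a second authentication request in the second entity's scheme.
        second_request = {"scheme": "oauth2", "subject": first_request["user"]}
        first_response = second_entity.authenticate(second_request)
        # Generate a second response in the first entity's scheme and validate it.
        second_response = {"scheme": first_request["scheme"],
                           "granted": first_response["status"] == "ok"}
        return self.validate(second_response)

    def validate(self, second_response: dict) -> bool:
        # Communication between the entities is allowed only after validation.
        return second_response["granted"]

bridge = AuthBridge()
print(bridge.handle({"user": "alice", "scheme": "saml"}, SecondEntity()))  # True
```
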
  • Patent number: 12224941
    Abstract: Embodiments of the present invention relate to a centralized network analytic device that efficiently uses on-chip memory to flexibly perform counting, traffic rate monitoring, and flow sampling. The device includes a pool of memory that is shared by all cores and by the packet processing stages of each core. The counting, monitoring, and sampling are all defined through software, allowing for greater flexibility and efficient analytics in the device. In some embodiments, the device is a network switch.
    Type: Grant
    Filed: March 3, 2023
    Date of Patent: February 11, 2025
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Weihuang Wang, Gerald Schmidt, Tsahi Daniel, Saurabh Shrivastava
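
The abstract above describes hardware, but the role-per-slot idea can be pictured in software. The model below assumes a single pool of slots that software assigns to counting, rate monitoring, or sampling; the slot layout, method names, and sampling rate are invented for illustration.

```python
# Rough software model of the shared-memory analytics idea: one pool of
# "on-chip" slots, carved up at runtime into counters, rate monitors, and
# flow samplers. Slot roles and APIs are assumptions for illustration.
import random

class AnalyticsPool:
    def __init__(self, num_slots: int):
        self.slots = [0] * num_slots           # shared by all cores/stages
        self.roles = {}                        # slot index -> role string

    def configure(self, index: int, role: str):
        # Software (not fixed hardware) decides how each slot is used.
        self.roles[index] = role
        self.slots[index] = 0

    def on_packet(self, index: int, pkt_bytes: int, sample_rate: int = 100):
        role = self.roles[index]
        if role == "counter":
            self.slots[index] += 1             # packet count
        elif role == "rate":
            self.slots[index] += pkt_bytes     # bytes seen in the current window
        elif role == "sampler":
            if random.randrange(sample_rate) == 0:
                self.slots[index] += 1         # count of sampled packets

pool = AnalyticsPool(num_slots=8)
pool.configure(0, "counter")
pool.configure(1, "rate")
pool.configure(2, "sampler")
for _ in range(1000):
    for i in (0, 1, 2):
        pool.on_packet(i, pkt_bytes=64)
print(pool.slots[:3])
```
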
  • Publication number: 20250045288
    Abstract: A system for data manipulation and management comprising a data integration engine, a data acquisition module, a data transformation module, a data output module, and a Spark engine. The data integration engine is configured to receive a job specification, wherein the job specification comprises a set of instructions. The data acquisition module is configured to acquire a set of data from a database. The data transformation module is configured to transform the set of data based on the set of instructions defined in the job specification. The data output module is configured to store the transformed set of data to the database. The Spark engine is configured to receive instructions from the data acquisition module, the data transformation module, and the data output module, which in turn are configured to receive instructions from the data integration engine.
    Type: Application
    Filed: August 2, 2023
    Publication date: February 6, 2025
    Applicant: Lumenore Inc.
    Inventors: Saurabh Mishra, Rahul Soni, Anurag Shrivastava
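
As a rough illustration of the acquire-transform-output pattern in the entry above, here is a hedged PySpark sketch. It assumes a working Spark environment; the job-specification fields, column names, and the in-memory stand-in for the database are assumptions, not details from the patent.

```python
# Illustrative PySpark sketch of the acquire -> transform -> output pattern
# described in the entry above. Field names and data are invented.
from pyspark.sql import SparkSession, functions as F

job_spec = {                                    # the "set of instructions"
    "filter": "amount > 0",
    "derived_column": ("amount_taxed", "amount * 1.18"),
    "target_path": "/tmp/clean_orders",
}

spark = SparkSession.builder.appName("data-integration-sketch").getOrCreate()

# Data acquisition: in a real deployment this would be a JDBC/warehouse read;
# an in-memory DataFrame stands in for the database here.
source = spark.createDataFrame(
    [(1, 120.0), (2, -5.0), (3, 40.0)], ["order_id", "amount"])

# Data transformation: apply the instructions carried by the job specification.
name, expr = job_spec["derived_column"]
transformed = source.filter(job_spec["filter"]).withColumn(name, F.expr(expr))

# Data output: persist the transformed set (Parquet here; a database table
# write would follow the same shape).
transformed.write.mode("overwrite").parquet(job_spec["target_path"])
```
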
  • Patent number: 12204455
    Abstract: A method includes synthesizing a hardware description language (HDL) code into a netlist comprising a first, a second, and a third component. The method further includes allocating addresses to each component of the netlist. Each allocation includes assigned addresses and unassigned addresses. An internal address space for a chip is formed based on the allocated addresses. The internal address space includes the assigned addresses followed by the unassigned addresses for the first component, concatenated to the assigned addresses followed by the unassigned addresses for the second component, concatenated to the assigned addresses followed by the unassigned addresses for the third component. An external address space for components outside of the chip is generated that includes only the assigned addresses of the first component concatenated to the assigned addresses of the second component concatenated to the assigned addresses of the third component. Internal addresses are translated to external addresses and vice versa.
    Type: Grant
    Filed: February 22, 2023
    Date of Patent: January 21, 2025
    Assignee: Marvell Asia Pte Ltd
    Inventors: Saurabh Shrivastava, Shrikant Sundaram, Guy T. Hutchison
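
A short worked example may help make the internal/external address-space distinction concrete. The sketch below uses made-up component sizes and a simple lookup table; it is not the patent's actual translation logic.

```python
# Toy model of the internal vs. external address spaces described above.
# The component sizes are invented; the point is that the internal space
# keeps each component's unassigned padding, while the external space
# packs only the assigned addresses back to back.
components = [
    {"name": "A", "assigned": 6, "window": 8},   # 6 assigned, padded to 8
    {"name": "B", "assigned": 3, "window": 4},
    {"name": "C", "assigned": 5, "window": 8},
]

def build_map(comps):
    table, int_base, ext_base = [], 0, 0
    for c in comps:
        table.append((int_base, ext_base, c["assigned"], c["window"]))
        int_base += c["window"]      # internal space keeps the padding
        ext_base += c["assigned"]    # external space is densely packed
    return table

def ext_to_int(table, ext_addr):
    for int_base, ext_base, assigned, _ in table:
        if ext_base <= ext_addr < ext_base + assigned:
            return int_base + (ext_addr - ext_base)
    raise ValueError("external address out of range")

def int_to_ext(table, int_addr):
    for int_base, ext_base, assigned, window in table:
        if int_base <= int_addr < int_base + window:
            offset = int_addr - int_base
            if offset >= assigned:
                raise ValueError("internal address maps to unassigned padding")
            return ext_base + offset
    raise ValueError("internal address out of range")

table = build_map(components)
print(ext_to_int(table, 6))    # first assigned address of B: external 6 -> internal 8
print(int_to_ext(table, 12))   # first assigned address of C: internal 12 -> external 9
```
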
  • Patent number: 11829492
    Abstract: A new approach is proposed to support hardware-based protection for registers of an electronic device. Sources requesting access to the registers are categorized into a set of internal sources that can be trusted and a set of external sources that are untrusted. The registers are classified into a set of internal registers that may be accessed by the internal sources only, a set of read-only external registers that can be read by the external sources in addition to being accessed by the internal sources, and a set of read/write external registers that can be read and written by both the internal and the external sources. Each access request by a source to the registers includes the source type, and the request is granted or denied based on the match between the source bits in the access request and the register classification bits of the one or more registers to be accessed.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: November 28, 2023
    Assignee: Marvell Asia Pte Ltd
    Inventors: Ramacharan Sundararaman, Saurabh Shrivastava, Avinash Sodani, Nithyananda Miyar
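
The classification-and-match step can be pictured in a few lines of code. The sketch below assumes one source-type bit and three register classes; the constants and register map are invented for illustration.

```python
# Small sketch of the access check described above: each request carries a
# source type, each register carries a classification, and the two are
# matched before access is granted. The encodings are illustrative only.
INTERNAL, EXTERNAL = 0, 1                     # source type bit in each request

REG_INTERNAL = "internal"                     # trusted internal sources only
REG_RO_EXTERNAL = "read_only_external"        # external sources may read
REG_RW_EXTERNAL = "read_write_external"       # external sources may read/write

registers = {0x00: REG_INTERNAL, 0x04: REG_RO_EXTERNAL, 0x08: REG_RW_EXTERNAL}

def access_allowed(source: int, addr: int, write: bool) -> bool:
    classification = registers[addr]
    if source == INTERNAL:
        return True                           # internal sources can access everything
    if classification == REG_RO_EXTERNAL:
        return not write                      # external reads only
    return classification == REG_RW_EXTERNAL  # external read/write registers

print(access_allowed(EXTERNAL, 0x00, write=False))  # False: internal-only register
print(access_allowed(EXTERNAL, 0x04, write=True))   # False: read-only for external
print(access_allowed(EXTERNAL, 0x08, write=True))   # True
```
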
  • Publication number: 20230216797
    Abstract: Embodiments of the present invention relate to a centralized network analytic device that efficiently uses on-chip memory to flexibly perform counting, traffic rate monitoring, and flow sampling. The device includes a pool of memory that is shared by all cores and by the packet processing stages of each core. The counting, monitoring, and sampling are all defined through software, allowing for greater flexibility and efficient analytics in the device. In some embodiments, the device is a network switch.
    Type: Application
    Filed: March 3, 2023
    Publication date: July 6, 2023
    Inventors: Weihuang Wang, Gerald Schmidt, Tsahi Daniel, Saurabh Shrivastava
  • Publication number: 20230205551
    Abstract: Systems, methods, and other embodiments associated with enabling client-side enforcement of custom rules when the client is in offline mode include storing a custom rule script that describes a custom rule in a mobile application client. Then, while the mobile application client is in an offline mode, the client accepts a user input to create or modify an object, determines that the custom rule script is associated with a triggering event that occurred due to the input to create or modify the object, and immediately enforces the custom rule script to validate the user input upon occurrence of the triggering event. The custom rule script is enforced prior to creating or modifying the object. Upon placement of the mobile application client into the online mode, the object as created or modified by the validated user input is synchronized to a mobile application server.
    Type: Application
    Filed: March 10, 2023
    Publication date: June 29, 2023
    Inventors: Saurabh Shrivastava, Srikanth Doddadalivatta Venkatesh Prasad
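
A minimal sketch of the client-side flow described in the abstract above, assuming an invented rule-script format (a plain Python callable) and a trivial sync call; none of these names come from the patent or from any Oracle API.

```python
# Hedged sketch of the offline flow: a stored rule script is run against
# user input at the moment of the triggering event, before the object is
# created, and pending objects are synchronized once the client is online.
class OfflineClient:
    def __init__(self):
        self.rule_scripts = {}        # triggering event -> validation callable
        self.pending = []             # objects created while offline

    def store_rule(self, event: str, script):
        self.rule_scripts[event] = script

    def create_object(self, event: str, user_input: dict):
        script = self.rule_scripts.get(event)
        # Enforce the custom rule immediately, prior to creating the object.
        if script and not script(user_input):
            raise ValueError("custom rule rejected the input")
        self.pending.append(user_input)

    def go_online(self, server_store: list):
        for obj in self.pending:      # synchronize the validated objects
            server_store.append(obj)
        self.pending.clear()

client = OfflineClient()
client.store_rule("create_contact", lambda o: "@" in o.get("email", ""))
client.create_object("create_contact", {"email": "a@example.com"})
server_store = []
client.go_online(server_store)
print(server_store)
```
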
  • Patent number: 11687144
    Abstract: A new approach contemplates systems and methods to support control of the power consumption of a memory on a chip by throttling port access requests to the memory via a memory arbiter based on one or more programmable parameters. The memory arbiter is configured to restrict the number of ports being used to access the memory at the same time to be less than the available ports of the memory, thereby enabling adaptive power control of the chip. Two port throttling schemes are enabled: strict port throttling, which throttles the number of ports granted for memory access to be no more than a user-configured maximum throttle port number, and leaky bucket port throttling, which throttles the number of ports granted for memory access down to be within a range based on a number of credit tokens maintained in a credit register.
    Type: Grant
    Filed: February 11, 2022
    Date of Patent: June 27, 2023
    Assignee: Marvell Asia Pte Ltd
    Inventors: Heeloo Chung, Sowmya Hotha, Saurabh Shrivastava, Chia-Hsin Chen
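
Both throttling schemes named in the abstract above are easy to model in software. The sketch below is a conceptual cycle-by-cycle simulation with invented parameter names; real hardware would arbitrate ports per clock cycle.

```python
# Conceptual model of strict vs. leaky-bucket port throttling.
# Parameter names and the arbitration details are assumptions.
class StrictThrottle:
    def __init__(self, max_ports: int):
        self.max_ports = max_ports                  # user-configured ceiling

    def grant(self, requested: int) -> int:
        return min(requested, self.max_ports)       # never exceed the ceiling

class LeakyBucketThrottle:
    def __init__(self, capacity: int, refill: int):
        self.capacity = capacity
        self.refill = refill                        # credit tokens added per cycle
        self.credits = capacity

    def grant(self, requested: int) -> int:
        self.credits = min(self.capacity, self.credits + self.refill)
        granted = min(requested, self.credits)      # spend one credit per port
        self.credits -= granted
        return granted

strict = StrictThrottle(max_ports=2)
bucket = LeakyBucketThrottle(capacity=4, refill=1)
for cycle in range(5):
    # 4 ports requested every cycle; the two schemes grant different amounts.
    print(cycle, strict.grant(4), bucket.grant(4))
```
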
  • Patent number: 11627087
    Abstract: Embodiments of the present invention relate to a centralized network analytic device that efficiently uses on-chip memory to flexibly perform counting, traffic rate monitoring, and flow sampling. The device includes a pool of memory that is shared by all cores and by the packet processing stages of each core. The counting, monitoring, and sampling are all defined through software, allowing for greater flexibility and efficient analytics in the device. In some embodiments, the device is a network switch.
    Type: Grant
    Filed: May 15, 2020
    Date of Patent: April 11, 2023
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Weihuang Wang, Gerald Schmidt, Tsahi Daniel, Saurabh Shrivastava
  • Patent number: 11614951
    Abstract: Systems, methods, and other embodiments associated with enabling client-side enforcement of custom rules when the client is in offline mode include: creating, on a mobile application server, a custom rule for enforcement on a mobile application client; defining characteristics of user accounts for which the mobile application client is to enforce the custom rule; in response to a connection being established between an instance of the mobile application client for a specific user account and the mobile application server, determining that the instance of the mobile application client should enforce the custom rule based at least in part on a match between characteristics of the specific user account and the defined characteristics; and transmitting the custom rule to the instance of the mobile application client to enable the instance to enforce the custom rule when the instance is operating in the offline mode.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: March 28, 2023
    Assignee: Oracle International Corporation
    Inventors: Saurabh Shrivastava, Srikanth Doddadalivatta Venkatesh Prasad
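
The server-side matching step in the abstract above can be sketched as a simple characteristics filter. The account fields ("role", "region") and rule format below are assumptions for illustration only, not the patent's data model.

```python
# Sketch of the server-side decision: a rule is tagged with account
# characteristics, and it is transmitted to a client instance only when
# that instance's account matches the defined characteristics.
custom_rules = [
    {"name": "discount_cap", "characteristics": {"role": "sales", "region": "EU"}},
]

def rules_for_account(account: dict) -> list:
    matched = []
    for rule in custom_rules:
        wanted = rule["characteristics"]
        if all(account.get(k) == v for k, v in wanted.items()):
            matched.append(rule["name"])
    return matched                   # these would be transmitted to the client

# On connection, the server decides what this client instance must enforce offline.
print(rules_for_account({"role": "sales", "region": "EU"}))    # ['discount_cap']
print(rules_for_account({"role": "support", "region": "EU"}))  # []
```
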
  • Patent number: 11609861
    Abstract: A method includes synthesizing a hardware description language (HDL) code into a netlist comprising a first, a second, and a third component. The method further includes allocating addresses to each component of the netlist. Each allocation includes assigned addresses and unassigned addresses. An internal address space for a chip is formed based on the allocated addresses. The internal address space includes the assigned addresses followed by the unassigned addresses for the first component, concatenated to the assigned addresses followed by the unassigned addresses for the second component, concatenated to the assigned addresses followed by the unassigned addresses for the third component. An external address space for components outside of the chip is generated that includes only the assigned addresses of the first component concatenated to the assigned addresses of the second component concatenated to the assigned addresses of the third component. Internal addresses are translated to external addresses and vice versa.
    Type: Grant
    Filed: July 31, 2020
    Date of Patent: March 21, 2023
    Assignee: Marvell Asia Pte Ltd
    Inventors: Saurabh Shrivastava, Shrikant Sundaram, Guy T. Hutchison
  • Publication number: 20220404995
    Abstract: Embodiments of the present invention relate to multiple parallel lookups using a pool of shared memories by proper configuration of interconnection networks. The number of shared memories reserved for each lookup is reconfigurable based on the memory capacity needed by that lookup. The shared memories are grouped into homogeneous tiles. Each lookup is allocated a set of tiles based on the memory capacity needed by that lookup. The tiles allocated for each lookup do not overlap with those of other lookups, such that all lookups can be performed in parallel without collision. Each lookup is reconfigurable to be either hash-based or direct-access. The interconnection networks are programmed based on how the tiles are allocated for each lookup.
    Type: Application
    Filed: July 27, 2022
    Publication date: December 22, 2022
    Inventors: Anh T. Tran, Gerald Schmidt, Tsahi Daniel, Saurabh Shrivastava
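
A software sketch of the tile-allocation idea in the entry above, assuming a fixed tile size, Python's built-in hash for the hash-based mode, and invented class names; a real device would use dedicated hash units and programmable interconnection networks.

```python
# Model of non-overlapping tile allocation: each lookup gets its own tiles
# from a shared pool and is configured as hash-based or direct-access.
TILE_SIZE = 4

class TilePool:
    def __init__(self, num_tiles: int):
        self.tiles = [[None] * TILE_SIZE for _ in range(num_tiles)]
        self.free = list(range(num_tiles))

    def allocate(self, n: int) -> list:
        ids, self.free = self.free[:n], self.free[n:]   # non-overlapping tiles
        return ids

class Lookup:
    def __init__(self, pool: TilePool, num_tiles: int, mode: str):
        self.pool, self.tile_ids, self.mode = pool, pool.allocate(num_tiles), mode

    def _slot(self, key: int):
        capacity = len(self.tile_ids) * TILE_SIZE
        index = hash(key) % capacity if self.mode == "hash" else key % capacity
        return self.tile_ids[index // TILE_SIZE], index % TILE_SIZE

    def insert(self, key: int, value):
        tile, offset = self._slot(key)
        self.pool.tiles[tile][offset] = (key, value)

    def search(self, key: int):
        tile, offset = self._slot(key)
        entry = self.pool.tiles[tile][offset]
        return entry[1] if entry and entry[0] == key else None

pool = TilePool(num_tiles=8)
routes = Lookup(pool, num_tiles=4, mode="hash")     # larger, hash-based table
ports = Lookup(pool, num_tiles=2, mode="direct")    # small, direct-indexed table
routes.insert(0x0A000001, "eth0")
ports.insert(3, "trunk")
print(routes.search(0x0A000001), ports.search(3))
```
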
  • Patent number: 11436040
    Abstract: A new approach of systems and methods to support a hierarchical interrupt propagation scheme for efficient interrupt propagation and handling is proposed. The hierarchical interrupt propagation scheme organizes a plurality of slave interrupt handlers associated with functional blocks in a chip into a hierarchy. When an exception or error condition occurs in a functional block, a slave interrupt handler associated with the functional block creates an interrupt packet as an interrupt notification and utilizes the pre-existing input and output interfaces already used for accessing registers of the functional block to transmit the created interrupt packet to a central interrupt handler through the hierarchy, without running dedicated interconnect wires out of the functional block.
    Type: Grant
    Filed: July 31, 2020
    Date of Patent: September 6, 2022
    Assignee: Marvell Asia Pte Ltd
    Inventors: Saurabh Shrivastava, Guy T. Hutchison
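
The packet-over-register-path idea in the abstract above can be modeled with a couple of small classes. The sketch below uses invented packet fields and class names and merely stands in for the hardware hierarchy the patent describes.

```python
# Toy model of the hierarchical scheme: a slave handler wraps an error into
# an "interrupt packet" and forwards it over the same register-access path
# it already exposes, up to the central handler.
class CentralInterruptHandler:
    def __init__(self):
        self.log = []

    def receive(self, packet: dict):
        self.log.append(packet)               # a real device would notify the CPU

class SlaveInterruptHandler:
    def __init__(self, block_name: str, parent):
        self.block_name = block_name
        self.parent = parent                  # next level up in the hierarchy

    def forward(self, packet: dict):
        # Reuse the existing register-access interface; no dedicated wires.
        if isinstance(self.parent, SlaveInterruptHandler):
            self.parent.forward(packet)       # climb one more level
        else:
            self.parent.receive(packet)       # reached the central handler

    def raise_interrupt(self, cause: str):
        self.forward({"block": self.block_name, "cause": cause})

central = CentralInterruptHandler()
cluster = SlaveInterruptHandler("cluster0", central)
mac = SlaveInterruptHandler("mac3", cluster)
mac.raise_interrupt("crc_error")
print(central.log)                            # [{'block': 'mac3', 'cause': 'crc_error'}]
```
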
  • Patent number: 11435925
    Abstract: Embodiments of the present invention relate to multiple parallel lookups using a pool of shared memories by proper configuration of interconnection networks. The number of shared memories reserved for each lookup is reconfigurable based on the memory capacity needed by that lookup. The shared memories are grouped into homogeneous tiles. Each lookup is allocated a set of tiles based on the memory capacity needed by that lookup. The tiles allocated for each lookup do not overlap with those of other lookups, such that all lookups can be performed in parallel without collision. Each lookup is reconfigurable to be either hash-based or direct-access. The interconnection networks are programmed based on how the tiles are allocated for each lookup.
    Type: Grant
    Filed: August 18, 2020
    Date of Patent: September 6, 2022
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Anh T. Tran, Gerald Schmidt, Tsahi Daniel, Saurabh Shrivastava
  • Publication number: 20220164018
    Abstract: A new approach contemplates systems and methods to support control of the power consumption of a memory on a chip by throttling port access requests to the memory via a memory arbiter based on one or more programmable parameters. The memory arbiter is configured to restrict the number of ports being used to access the memory at the same time to be less than the available ports of the memory, thereby enabling adaptive power control of the chip. Two port throttling schemes are enabled: strict port throttling, which throttles the number of ports granted for memory access to be no more than a user-configured maximum throttle port number, and leaky bucket port throttling, which throttles the number of ports granted for memory access down to be within a range based on a number of credit tokens maintained in a credit register.
    Type: Application
    Filed: February 11, 2022
    Publication date: May 26, 2022
    Inventors: Heeloo Chung, Sowmya Hotha, Saurabh Shrivastava, Chia-Hsin Chen
  • Patent number: 11287869
    Abstract: A new approach contemplates systems and methods to support control of the power consumption of a memory on a chip by throttling port access requests to the memory via a memory arbiter based on one or more programmable parameters. The memory arbiter is configured to restrict the number of ports being used to access the memory at the same time to be less than the available ports of the memory, thereby enabling adaptive power control of the chip. Two port throttling schemes are enabled: strict port throttling, which throttles the number of ports granted for memory access to be no more than a user-configured maximum throttle port number, and leaky bucket port throttling, which throttles the number of ports granted for memory access down to be within a range based on a number of credit tokens maintained in a credit register.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: March 29, 2022
    Assignee: Marvell Asia Pte Ltd
    Inventors: Heeloo Chung, Sowmya Hotha, Saurabh Shrivastava, Chia-Hsin Chen
  • Publication number: 20210341988
    Abstract: A new approach contemplates systems and methods to support control of the power consumption of a memory on a chip by throttling port access requests to the memory via a memory arbiter based on one or more programmable parameters. The memory arbiter is configured to restrict the number of ports being used to access the memory at the same time to be less than the available ports of the memory, thereby enabling adaptive power control of the chip. Two port throttling schemes are enabled: strict port throttling, which throttles the number of ports granted for memory access to be no more than a user-configured maximum throttle port number, and leaky bucket port throttling, which throttles the number of ports granted for memory access down to be within a range based on a number of credit tokens maintained in a credit register.
    Type: Application
    Filed: April 30, 2020
    Publication date: November 4, 2021
    Inventors: Heeloo Chung, Sowmya Hotha, Saurabh Shrivastava, Chia-Hsin Chen
  • Publication number: 20210318903
    Abstract: A new approach of systems and methods to support a hierarchical interrupt propagation scheme for efficient interrupt propagation and handling is proposed. The hierarchical interrupt propagation scheme organizes a plurality of slave interrupt handlers associated with functional blocks in a chip into a hierarchy. When an exception or error condition occurs in a functional block, a slave interrupt handler associated with the functional block creates an interrupt packet as an interrupt notification and utilizes the pre-existing input and output interfaces already used for accessing registers of the functional block to transmit the created interrupt packet to a central interrupt handler through the hierarchy, without running dedicated interconnect wires out of the functional block.
    Type: Application
    Filed: July 31, 2020
    Publication date: October 14, 2021
    Inventors: Saurabh Shrivastava, Guy T. Hutchison
  • Publication number: 20210279079
    Abstract: Systems, methods, and other embodiments associated with enabling client-side enforcement of custom rules when the client is in offline mode include: creating, on a mobile application server, a custom rule for enforcement on a mobile application client; defining characteristics of user accounts for which the mobile application client is to enforce the custom rule; in response to a connection being established between an instance of the mobile application client for a specific user account and the mobile application server, determining that the instance of the mobile application client should enforce the custom rule based at least in part on a match between characteristics of the specific user account and the defined characteristics; and transmitting the custom rule to the instance of the mobile application client to enable the instance to enforce the custom rule when the instance is operating in the offline mode.
    Type: Application
    Filed: March 9, 2020
    Publication date: September 9, 2021
    Inventors: Saurabh Shrivastava, Srikanth Doddadalivatta Venkatesh Prasad
  • Publication number: 20210034269
    Abstract: Embodiments of the present invention relate to multiple parallel lookups using a pool of shared memories by proper configuration of interconnection networks. The number of shared memories reserved for each lookup is reconfigurable based on the memory capacity needed by that lookup. The shared memories are grouped into homogeneous tiles. Each lookup is allocated a set of tiles based on the memory capacity needed by that lookup. The tiles allocated for each lookup do not overlap with those of other lookups, such that all lookups can be performed in parallel without collision. Each lookup is reconfigurable to be either hash-based or direct-access. The interconnection networks are programmed based on how the tiles are allocated for each lookup.
    Type: Application
    Filed: August 18, 2020
    Publication date: February 4, 2021
    Inventors: Anh T. Tran, Gerald Schmidt, Tsahi Daniel, Saurabh Shrivastava