Patents by Inventor Chidamber Kulkarni

Chidamber Kulkarni has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240152357
    Abstract: Circuitry, systems, and methods are provided for an integrated circuit device including a programmable logic fabric. The programmable logic fabric is configured to implement software-defined vector engines. The programmable logic fabric also includes a data movement engine (DME) that uses multiple DME threads to programmably insert data within an interior of the software-defined vector engines.
    Type: Application
    Filed: December 28, 2023
    Publication date: May 9, 2024
    Inventor: Chidamber Kulkarni
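    Illustrative sketch (not part of the patent record): the abstract above describes data movement engine (DME) threads programmably feeding data into software-defined vector engines. The Python model below is a loose software analogy under invented names, not the patented circuitry: producer threads stand in for DME threads and push blocks into per-engine input queues that simple "vector engines" consume.
      import threading, queue

      NUM_ENGINES, NUM_DME_THREADS = 2, 3
      # One input queue per software-defined "vector engine" (illustrative stand-in).
      engine_inputs = [queue.Queue() for _ in range(NUM_ENGINES)]

      def dme_thread(tid, blocks):
          # A DME thread steers each of its data blocks to a target engine.
          for i, block in enumerate(blocks):
              engine_inputs[(tid + i) % NUM_ENGINES].put((tid, block))

      def vector_engine(eid):
          # Each engine applies a simple element-wise operation to incoming blocks.
          while (item := engine_inputs[eid].get()) is not None:
              tid, block = item
              print(f"engine {eid} got {[x * 2 for x in block]} from DME thread {tid}")

      engines = [threading.Thread(target=vector_engine, args=(e,)) for e in range(NUM_ENGINES)]
      dmes = [threading.Thread(target=dme_thread, args=(t, [[t, t + 1]])) for t in range(NUM_DME_THREADS)]
      for th in engines + dmes: th.start()
      for th in dmes: th.join()
      for q in engine_inputs: q.put(None)   # shutdown sentinels
      for th in engines: th.join()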
  • Patent number: 11537542
    Abstract: Disclosed approaches eliminate the involvement of a bus interface in the polling performed by the host computer system and the peripheral component for events that coordinate direct memory access (DMA) transfers. The host polls main memory for DMA events communicated by the peripheral component, and the peripheral component polls local registers for DMA addresses to initiate DMA transfers. DMA transfers are initiated by the host storing main memory addresses in the local registers of the peripheral component, and DMA events generated by the peripheral component are stored in the main memory.
    Type: Grant
    Filed: January 27, 2021
    Date of Patent: December 27, 2022
    Assignee: MARVELL ASIA PTE LTD.
    Inventors: Syam Prasad, Amarnath Vishwakarma, Chidamber Kulkarni, Prasanna Sukumar
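    Illustrative sketch (not part of the patent record): a minimal Python model of the polling arrangement in the abstract above, assuming plain lists can stand in for the peripheral's local registers and the host's main memory. The host posts a DMA address into the peripheral's registers and then polls only its own main memory; the peripheral polls only its local registers and posts the completion event back into host memory, so neither side polls across the bus interface.
      import threading, time

      peripheral_registers = []   # local to the peripheral: DMA addresses posted by the host
      host_main_memory = []       # local to the host: DMA events posted by the peripheral

      def host():
          # Host initiates a transfer by storing a main-memory address in the
          # peripheral's local registers, then polls only its own main memory.
          peripheral_registers.append(0x1000)
          while not host_main_memory:       # poll local main memory for the DMA event
              time.sleep(0.01)
          print("host: DMA event received:", host_main_memory.pop())

      def peripheral():
          # Peripheral polls only its local registers for posted DMA addresses.
          while not peripheral_registers:
              time.sleep(0.01)
          addr = peripheral_registers.pop()
          # ... perform the DMA transfer for `addr` (omitted) ...
          host_main_memory.append(("transfer complete", hex(addr)))

      t_host, t_periph = threading.Thread(target=host), threading.Thread(target=peripheral)
      t_host.start(); t_periph.start()
      t_host.join(); t_periph.join()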
  • Patent number: 11429595
    Abstract: A database proxy includes a computing device and a hardware-accelerated database proxy module. The database proxy is configured to receive a write request from a client; store the write request in a commit log in a first non-volatile memory device; in response to storing the write request in the commit log, return to the client a signal acknowledging success of the write request; store the write request in a cache in a second non-volatile memory device; cause the write request to be written in a database store; and based on a first determination that the write request is stored in the cache and on a second determination that the write request is written in the database store, remove the write request from the commit log.
    Type: Grant
    Filed: April 1, 2020
    Date of Patent: August 30, 2022
    Assignee: MARVELL ASIA PTE LTD.
    Inventors: Amarnath Vishwakarma, Syam Prasad, Murali Krishna, Ashutosh Sharma, Kuladeep Sai Reddy, Vaibhav Jain, Prasanna Sukumar, Chidamber Kulkarni, Prasanna Sundararajan
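    Illustrative sketch (not part of the patent record): the ordering of the write path described in the abstract above, modeled with plain Python dictionaries standing in for the commit log, cache, and database store; the hardware acceleration and non-volatile devices are not modeled.
      commit_log = {}      # stands in for the first non-volatile memory device
      cache = {}           # stands in for the second non-volatile memory device
      database_store = {}  # stands in for the backing database store

      def handle_write(key, value):
          # 1. Store the write request in the commit log, then acknowledge the client.
          commit_log[key] = value
          ack = "OK"                      # acknowledgement returned to the client at this point

          # 2. Store the write request in the cache.
          cache[key] = value

          # 3. Cause the write request to be written in the database store.
          database_store[key] = value

          # 4. Remove it from the commit log only once it is in both cache and store.
          if key in cache and key in database_store:
              del commit_log[key]
          return ack

      print(handle_write("user:42", {"name": "Ada"}))   # -> OK
      print(commit_log, cache, database_store)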
  • Patent number: 11349922
    Abstract: A database proxy includes a computing device and a hardware-accelerated database proxy module. The computing device includes one or more processors, memory, a first bus interface, and a network interface coupling the database proxy to one or more networks. The database proxy module includes a second bus interface coupled to the first bus interface via one or more buses, and a request processor. The database proxy is configured to receive a database read request from a client via the one or more networks and the network interface; forward the database read request to the request processor using the one or more buses; process, using the request processor, the database read request; and return results of the database read request to the client. In some embodiments, the computing device or the database proxy module further includes a flash memory interface for accessing one or more flash memory devices.
    Type: Grant
    Filed: October 23, 2019
    Date of Patent: May 31, 2022
    Assignee: MARVELL ASIA PTE LTD.
    Inventors: Chidamber Kulkarni, Amarnath Vishwakarma, Raushan Raj, Vijaya Raghava Chiyedu, Rahul Sachdev, Rahul Jain, Prasanna Sukumar, Prasanna Sundararajan
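    Illustrative sketch (not part of the patent record): a toy Python model of the read path in the abstract above, with queues standing in for the bus interfaces between the computing device and the hardware request processor; the names and threading structure are invented for illustration.
      import queue, threading

      bus_to_module = queue.Queue()   # models the first/second bus interface pair
      bus_to_host = queue.Queue()
      backing_data = {"k1": "v1"}     # data reachable by the request processor

      def request_processor():
          # Hardware-side request processor: services one read arriving over the bus.
          key = bus_to_module.get()
          bus_to_host.put(backing_data.get(key))

      def database_proxy_read(key):
          # Host side: the request arrives from a client over the network interface
          # (omitted), is forwarded over the bus, and the result is returned.
          worker = threading.Thread(target=request_processor)
          worker.start()
          bus_to_module.put(key)
          result = bus_to_host.get()
          worker.join()
          return result

      print(database_proxy_read("k1"))   # -> v1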
  • Patent number: 11256515
    Abstract: Techniques for accelerating compaction include a compaction accelerator. The compaction accelerator includes a compactor separate from a processor performing read and write operations for a database or a data store. The compactor includes a plurality of compaction resources. The compactor is configured to receive a compaction request and data to be compacted, compact the data via a compaction pipeline to generate compacted data, and forward the compacted data to the processor, the database, or the data store. The compaction pipeline has a first portion of the plurality of compaction resources.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: February 22, 2022
    Assignee: MARVELL ASIA PTE LTD.
    Inventors: Chidamber Kulkarni, Rahul Jain, Prasanna Sundararajan
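    Illustrative sketch (not part of the patent record): a Python model of a compactor that receives a compaction request with its data, runs the data through a pipeline built from a subset of its compaction resources, and returns the compacted result; the specific stages (deduplication by timestamp, then sorting) are invented for illustration.
      def dedup_stage(rows):
          # Keep only the newest value per key; rows are (key, value, timestamp).
          latest = {}
          for key, value, ts in rows:
              if key not in latest or ts > latest[key][1]:
                  latest[key] = (value, ts)
          return [(k, v, ts) for k, (v, ts) in latest.items()]

      def sort_stage(rows):
          return sorted(rows, key=lambda r: r[0])

      # The pipeline is assembled from a subset ("first portion") of the resources.
      compaction_pipeline = [dedup_stage, sort_stage]

      def compact(request_rows):
          data = request_rows
          for stage in compaction_pipeline:
              data = stage(data)
          return data   # forwarded to the processor, database, or data store

      rows = [("a", 1, 10), ("b", 2, 11), ("a", 3, 12)]
      print(compact(rows))   # -> [('a', 3, 12), ('b', 2, 11)]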
  • Publication number: 20210311930
    Abstract: A database proxy includes a computing device and a hardware-accelerated database proxy module. The database proxy is configured to receive a write request from a client; store the write request in a commit log in a first non-volatile memory device; in response to storing the write request in the commit log, return to the client a signal acknowledging success of the write request; store the write request in a cache in a second non-volatile memory device; cause the write request to be written in a database store; and based on a first determination that the write request is stored in the cache and on a second determination that the write request is written in the database store, remove the write request from the commit log.
    Type: Application
    Filed: April 1, 2020
    Publication date: October 7, 2021
    Inventors: Amarnath VISHWAKARMA, Syam PRASAD, Murali KRISHNA, Ashutosh SHARMA, Kuladeep Sai REDDY, Vaibhav JAIN, Prasanna SUKUMAR, Chidamber KULKARNI, Prasanna SUNDARARAJAN
  • Patent number: 11126600
    Abstract: A system and method for accelerating compaction includes a compaction accelerator. The accelerator includes a compactor separate from a processor performing read and write operations for a database or a data store. The compactor is configured to receive a table to be compacted and entries written in the table, each of the entries being associated with a timestamp indicating when they were respectively written; identify, using a plurality of sort engines operating in parallel, the entries that were written last based on the timestamps; mark, using a plurality of marker engines operating in parallel, older copies of the entries for deletion; create, using the plurality of marker engines, tombstones for the older copies; create a compacted table, including the entries that were last written; delete the tombstones and the entries associated with the tombstones; and generate a freemap based on storage locations of the entries associated with the tombstones.
    Type: Grant
    Filed: April 24, 2018
    Date of Patent: September 21, 2021
    Assignee: RENIAC, INC.
    Inventors: Chidamber Kulkarni, Prasanna Sundararajan
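    Illustrative sketch (not part of the patent record): the compaction steps from the abstract above modeled sequentially in Python; the parallel sort and marker engines are not modeled, and the entry layout (key, value, timestamp, storage location) is an assumption.
      # Each entry: (key, value, timestamp, storage_location). The parallel sort
      # and marker engines of the patent are modeled as ordinary sequential passes.
      entries = [
          ("a", "v1", 10, 0),
          ("a", "v2", 20, 1),   # newer copy of "a"
          ("b", "v3", 15, 2),
      ]

      # Identify the entries that were written last, based on their timestamps.
      latest = {}
      for key, value, ts, loc in entries:
          if key not in latest or ts > latest[key][1]:
              latest[key] = (value, ts, loc)

      # Mark older copies for deletion by creating tombstones for them.
      tombstones = [(k, v, ts, loc) for k, v, ts, loc in entries
                    if (v, ts, loc) != latest[k]]

      # Create the compacted table from the last-written entries.
      compacted_table = {k: v for k, (v, ts, loc) in latest.items()}

      # Delete the tombstoned entries and generate a freemap from their locations.
      freemap = sorted(loc for _, _, _, loc in tombstones)

      print(compacted_table)   # -> {'a': 'v2', 'b': 'v3'}
      print(freemap)           # -> [0], the location freed by the older copy of "a"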
  • Patent number: 11044314
    Abstract: A database proxy includes a request processor, a cache, a database plugin, and interfaces for coupling the database proxy to client devices, other database proxies, and database servers. The request processor is configured to receive a read request from a client, determine whether the read request is assigned to the database proxy, and return results of the read request to the client. When the read request is not assigned to the database proxy, the read request is forwarded to another database proxy. When the read request is assigned to the database proxy, the read request is processed using data stored in the cache when the results are stored in the cache or forwarded to the database plugin, which forwards the read request to a database server, receives the results from the database server, and returns the results to the request processor for storage in the cache.
    Type: Grant
    Filed: March 13, 2019
    Date of Patent: June 22, 2021
    Assignee: RENIAC, INC.
    Inventors: Chidamber Kulkarni, Aditya Alurkar, Pradeep Mishra, Prasanna Sukumar, Vijaya Raghava, Raushan Raj, Rahul Sachdev, Gurshaant Singh Malik, Ajit Mathew, Prasanna Sundararajan
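    Illustrative sketch (not part of the patent record): a Python model of the read-request routing in the abstract above; the hash-based ownership rule standing in for request assignment is invented, and the database plugin is reduced to a dictionary lookup.
      class DatabaseProxy:
          # Toy model of a proxy with a request processor, a cache, and a database plugin.
          def __init__(self, proxy_id, database):
              self.proxy_id, self.database = proxy_id, database
              self.peers = {}        # other database proxies, by id
              self.cache = {}

          def assigned_to(self, key):
              # Invented ownership rule standing in for the real assignment scheme.
              return hash(key) % (len(self.peers) + 1) == self.proxy_id

          def read(self, key):
              if not self.assigned_to(key):
                  owner = hash(key) % (len(self.peers) + 1)
                  return self.peers[owner].read(key)       # forward to the responsible proxy
              if key in self.cache:                        # results already cached
                  return self.cache[key]
              result = self.database.get(key)              # database plugin queries the server
              self.cache[key] = result                     # store results for later reads
              return result

      server = {"k": "v"}
      p0, p1 = DatabaseProxy(0, server), DatabaseProxy(1, server)
      p0.peers, p1.peers = {1: p1}, {0: p0}
      print(p0.read("k"))   # -> v, served locally or forwarded depending on assignment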
  • Patent number: 10931587
    Abstract: A system includes a field-programmable gate array (FPGA) with a configurable logic module. The configurable logic module is configured to implement a protocol endpoint, the protocol endpoint including a congestion control module. In some examples, the protocol endpoint corresponds to a transmission control protocol (TCP) endpoint. In some examples, state information associated with a networking protocol implemented by the protocol endpoint is stored in and retrieved from block memory of the configurable logic module. In some examples, no state information associated with the networking protocol is stored in and retrieved from a memory other than the block memory. In further examples, the congestion control module is configured to perform operations comprising monitoring a congestion condition of a network and dynamically switching among a plurality of congestion control algorithms based on the monitored congestion condition.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: February 23, 2021
    Assignee: RENIAC, INC.
    Inventors: Chidamber Kulkarni, Prasanna Sundararajan
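    Illustrative sketch (not part of the patent record): the dynamic switching among congestion control algorithms described in the abstract above, modeled in Python with two toy window-update rules and an invented loss-rate threshold; the FPGA implementation and block-memory state handling are not modeled.
      # Two toy congestion-window update rules; the real module implements full
      # algorithms in FPGA logic, which is not modeled here.
      def conservative_update(cwnd, loss_event):
          return max(1, cwnd // 2) if loss_event else cwnd + 1        # AIMD-like
      def aggressive_update(cwnd, loss_event):
          return max(1, int(cwnd * 0.8)) if loss_event else cwnd + 4  # grows faster

      class CongestionControlModule:
          def __init__(self):
              self.cwnd = 10
              self.update = aggressive_update

          def on_ack(self, observed_loss_rate, loss_event):
              # Monitor the congestion condition and switch algorithms dynamically
              # (the 5% loss-rate threshold is purely illustrative).
              self.update = conservative_update if observed_loss_rate > 0.05 else aggressive_update
              self.cwnd = self.update(self.cwnd, loss_event)
              return self.cwnd

      ccm = CongestionControlModule()
      print(ccm.on_ack(0.01, False))   # low loss: keep the aggressive rule, cwnd grows
      print(ccm.on_ack(0.10, True))    # high loss: switch to the conservative rule, back off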
  • Publication number: 20200379775
    Abstract: Techniques for accelerating compaction include a compaction accelerator. The compaction accelerator includes a compactor separate from a processor performing read and write operations for a database or a data store. The compactor includes a plurality of compaction resources. The compactor is configured to receive a compaction request and data to be compacted, compact the data via a compaction pipeline to generate compacted data, and forward the compacted data to the processor, the database, or the data store. The compaction pipeline has a first portion of the plurality of compaction resources.
    Type: Application
    Filed: May 27, 2020
    Publication date: December 3, 2020
    Inventors: Chidamber KULKARNI, Rahul JAIN, Prasanna SUNDARARAJAN
  • Publication number: 20200059515
    Abstract: A database proxy includes a computing device and a hardware-accelerated database proxy module. The computing device includes one or more processors, memory, a first bus interface, and a network interface coupling the database proxy to one or more networks. The database proxy module includes a second bus interface coupled to the first bus interface via one or more buses, and a request processor. The database proxy is configured to receive a database read request from a client via the one or more networks and the network interface; forward the database read request to the request processor using the one or more buses; process, using the request processor, the database read request; and return results of the database read request to the client. In some embodiments, the computing device or the database proxy module further includes a flash memory interface for accessing one or more flash memory devices.
    Type: Application
    Filed: October 23, 2019
    Publication date: February 20, 2020
    Inventors: Chidamber KULKARNI, Amarnath VISHWAKARMA, Raushan RAJ, Vijaya Raghava CHIYEDU, Rahul SACHDEV, Rahul JAIN, Prasanna SUKUMAR, Prasanna SUNDARARAJAN
  • Publication number: 20190273782
    Abstract: A database proxy includes a request processor, a cache, a database plugin, and interfaces for coupling the database proxy to client devices, other database proxies, and database servers. The request processor is configured to receive a read request from a client, determine whether the read request is assigned to the database proxy, and return results of the read request to the client. When the read request is not assigned to the database proxy, the read request is forwarded to another database proxy. When the read request is assigned to the database proxy, the read request is processed using data stored in the cache when the results are stored in the cache or forwarded to the database plugin, which forwards the read request to a database server, receives the results from the database server, and returns the results to the request processor for storage in the cache.
    Type: Application
    Filed: March 13, 2019
    Publication date: September 5, 2019
    Inventors: Chidamber KULKARNI, Aditya ALURKAR, Pradeep MISHRA, Prasanna SUKUMAR, Vijaya RAGHAVA, Raushan RAJ, Rahul SACHDEV, Gurshaant Singh MALIK, Ajit MATHEW, Prasanna SUNDARARAJAN
  • Publication number: 20190182170
    Abstract: A system includes a field-programmable gate array (FPGA) with a configurable logic module. The configurable logic module is configured to implement a protocol endpoint, the protocol endpoint including a congestion control module. In some examples, the protocol endpoint corresponds to a transmission control protocol (TCP) endpoint. In some examples, state information associated with a networking protocol implemented by the protocol endpoint is stored in and retrieved from block memory of the configurable logic module. In some examples, no state information associated with the networking protocol is stored in and retrieved from a memory other than the block memory. In further examples, the congestion control module is configured to perform operations comprising monitoring a congestion condition of a network and dynamically switching among a plurality of congestion control algorithms based on the monitored congestion condition.
    Type: Application
    Filed: December 7, 2018
    Publication date: June 13, 2019
    Inventors: Chidamber KULKARNI, Prasanna SUNDARARAJAN
  • Patent number: 10237350
    Abstract: A database proxy includes a request processor, a cache, a database plugin, and interfaces for coupling the database proxy to client devices, other database proxies, and database servers. The request processor is configured to receive a read request from a client, determine whether the read request is assigned to the database proxy, and return results of the read request to the client. When the read request is not assigned to the database proxy, the read request is forwarded to another database proxy. When the read request is assigned to the database proxy, the read request is processed using data stored in the cache when the results are stored in the cache or forwarded to the database plugin, which forwards the read request to a database server, receives the results from the database server, and returns the results to the request processor for storage in the cache.
    Type: Grant
    Filed: November 7, 2016
    Date of Patent: March 19, 2019
    Assignee: RENIAC, INC.
    Inventors: Chidamber Kulkarni, Aditya Alurkar, Pradeep Mishra, Prasanna Sukumar, Vijaya Raghava, Raushan Raj, Rahul Sachdev, Gurshaant Singh Malik, Ajit Mathew, Prasanna Sundararajan
  • Publication number: 20180307711
    Abstract: A system and method for accelerating compaction includes a compaction accelerator. The accelerator includes a compactor separate from a processor performing read and write operations for a database or a data store. The compactor is configured to receive a table to be compacted and entries written in the table, each of the entries being associated with a timestamp indicating when they were respectively written; identify, using a plurality of sort engines operating in parallel, the entries that were written last based on the timestamps; mark, using a plurality of marker engines operating in parallel, older copies of the entries for deletion; create, using the plurality of marker engines, tombstones for the older copies; create a compacted table, including the entries that were last written; delete the tombstones and the entries associated with the tombstones; and generate a freemap based on storage locations of the entries associated with the tombstones.
    Type: Application
    Filed: April 24, 2018
    Publication date: October 25, 2018
    Applicant: RENIAC, INC.
    Inventors: Chidamber Kulkarni, Prasanna Sundararajan
  • Patent number: 10049035
    Abstract: A disclosed stream memory management circuit includes a first memory controller circuit for accessing a first memory of a first type. A second memory controller circuit is provided for accessing a second memory of a second type different from the first type. An access circuit is coupled to the first and second memory controller circuits for inputting and outputting streaming data. An allocation circuit is coupled to the access circuit, the allocation circuit configured and arranged to select either the first memory or the second memory for allocation of storage for the streaming data in response to attributes associated with the streaming data. A de-allocation circuit is coupled to the access circuit for de-allocating storage assigned to the streaming data from the first and second memories.
    Type: Grant
    Filed: March 4, 2016
    Date of Patent: August 14, 2018
    Assignee: Reniac, Inc.
    Inventors: Chidamber Kulkarni, Prasanna Sundararajan
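    Illustrative sketch (not part of the patent record): a Python model of allocating streaming-data storage from one of two memory types based on attributes of the stream, plus de-allocation; the attribute names and the selection policy are invented.
      # Two memory pools standing in for the two memory types.
      pools = {"fast_small": {"free": 4}, "slow_large": {"free": 64}}
      allocations = {}

      def allocate(stream_id, attrs):
          # Select a memory based on attributes associated with the streaming data.
          if attrs.get("latency_sensitive") and attrs["blocks"] <= pools["fast_small"]["free"]:
              kind = "fast_small"
          else:
              kind = "slow_large"
          pools[kind]["free"] -= attrs["blocks"]
          allocations[stream_id] = (kind, attrs["blocks"])
          return kind

      def deallocate(stream_id):
          # De-allocate the storage previously assigned to the stream.
          kind, blocks = allocations.pop(stream_id)
          pools[kind]["free"] += blocks

      print(allocate("cam0", {"latency_sensitive": True, "blocks": 2}))    # -> fast_small
      print(allocate("log0", {"latency_sensitive": False, "blocks": 16}))  # -> slow_large
      deallocate("cam0")
      print(pools)   # fast_small capacity restored after de-allocation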
  • Publication number: 20170295236
    Abstract: A database proxy includes a request processor, a cache, a database plugin, and interfaces for coupling the database proxy to client devices, other database proxies, and database servers. The request processor is configured to receive a read request from a client, determine whether the read request is assigned to the database proxy, and return results of the read request to the client. When the read request is not assigned to the database proxy, the read request is forwarded to another database proxy. When the read request is assigned to the database proxy, the read request is processed using data stored in the cache when the results are stored in the cache or forwarded to the database plugin, which forwards the read request to a database server, receives the results from the database server, and returns the results to the request processor for storage in the cache.
    Type: Application
    Filed: November 7, 2016
    Publication date: October 12, 2017
    Inventors: CHIDAMBER KULKARNI, Aditya Alurkar, Pradeep Mishra, Prasanna Sukumar, Vijaya Raghava, Raushan Raj, Rahul Sachdev, Gurshaant Singh Malik, Ajit Mathew, Prasanna Sundararajan
  • Patent number: 9286221
    Abstract: A heterogeneous memory system includes a main memory arrangement, a first-level cache, a second-level cache, and a memory management unit (MMU). The first-level cache includes an SRAM arrangement and the second-level cache includes a DRAM arrangement. The MMU is configured and arranged to read first data from the main memory arrangement in response to a stored first value associated with the first data and indicative of a start time. The MMU selects one of the first-level cache or the second-level cache for storage of the first data and stores the first data in the selected one of the first or second-level caches. The MMU reads second data from one of the first-level cache or second-level cache and writes the data to the main memory arrangement in response to a stored second value associated with the second data and indicative of a duration.
    Type: Grant
    Filed: April 7, 2015
    Date of Patent: March 15, 2016
    Assignee: Reniac, Inc.
    Inventors: Prasanna Sundararajan, Chidamber Kulkarni
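    Illustrative sketch (not part of the patent record): a Python model of the MMU behavior in the abstract above, where each block's stored start time triggers staging into a first- or second-level cache and its stored duration triggers write-back to main memory; the size-based cache-level policy and the timing loop are invented.
      # Each block in main memory carries a start time and a duration; the MMU
      # stages it into the first- or second-level cache at its start time and
      # writes it back once its duration has elapsed.
      main_memory = {"blk_a": ([1] * 4, 2, 3),     # (data, start_time, duration)
                     "blk_b": ([2] * 64, 3, 2)}
      l1_cache, l2_cache = {}, {}                  # SRAM-like and DRAM-like caches
      L1_CAPACITY = 16                             # invented size threshold for L1

      for tick in range(8):
          for name, (data, start, duration) in main_memory.items():
              if tick == start:
                  # Select a cache level for the block (policy here is illustrative).
                  target = l1_cache if len(data) <= L1_CAPACITY else l2_cache
                  target[name] = (data, start + duration)
                  print(f"tick {tick}: staged {name} into "
                        f"{'L1' if target is l1_cache else 'L2'}")
          for cache in (l1_cache, l2_cache):
              for name, (data, deadline) in list(cache.items()):
                  if tick >= deadline:
                      del cache[name]              # duration elapsed: write back
                      print(f"tick {tick}: wrote {name} back to main memory")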
  • Patent number: 9262325
    Abstract: A heterogeneous memory system includes a network interface card, a main memory arrangement, a first-level cache, and a memory management unit (MMU). The main memory arrangement, first-level cache and the MMU are disposed on the network interface card. The first-level cache includes an SRAM arrangement and a DRAM arrangement. The MMU is configured and arranged to read first data from the main memory arrangement in response to a stored first value associated with the first data and indicative of a start time. The MMU selects one of the SRAM arrangement or the DRAM arrangement for storage of the first data and stores the first data in the selected one of the SRAM arrangement or DRAM arrangement. The MMU reads second data from one of the SRAM arrangement or DRAM arrangement and writes the data to the main memory arrangement in response to a stored second value associated with the second data and indicative of a duration.
    Type: Grant
    Filed: April 7, 2015
    Date of Patent: February 16, 2016
    Assignee: Reniac, Inc.
    Inventors: Prasanna Sundararajan, Chidamber Kulkarni
  • Patent number: 9043557
    Abstract: A heterogeneous memory system includes a main memory arrangement, a first-level cache, and a memory management unit (MMU). The first-level cache includes an SRAM arrangement and a DRAM arrangement. The MMU is configured and arranged to read first data from the main memory arrangement in response to a stored first value associated with the first data and indicative of a start time. The MMU selects one of the SRAM arrangement or the DRAM arrangement for storage of the first data and stores the first data in the selected one of the SRAM arrangement or DRAM arrangement. The MMU reads second data from one of the SRAM arrangement or DRAM arrangement and writes the data to the main memory arrangement in response to a stored second value associated with the second data and indicative of a duration.
    Type: Grant
    Filed: June 5, 2013
    Date of Patent: May 26, 2015
    Inventors: Prasanna Sundararajan, Chidamber Kulkarni