Patents by Inventor Sreenivas Krishnan

Sreenivas Krishnan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20170123686
    Abstract: A system and method for managing garbage collection in Solid State Drives (SSDs) (120-1, 120-2, 120-3, 120-4, 120-5) in a Redundant Array of Independent Disks (RAID) configuration, using a RAID controller (115) is described. A control logic (505) can control read and write requests (805, 810) for the SSDs (120-1, 120-2, 120-3, 120-4, 120-5) in the RAID configuration. A selection logic (515) can select an SSD for garbage collection. Setup logic (520) can instruct the selected SSD to enter a garbage collection setup phase (920). An execute logic (525) can instruct the selected SSD to enter and exit the garbage collection execute phase (925).
    Type: Application
    Filed: January 19, 2016
    Publication date: May 4, 2017
    Inventors: Oscar PINTO, Sreenivas KRISHNAN
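
A minimal Python sketch of the coordination flow this abstract describes: a controller selects one drive for garbage collection, walks it through a setup phase and an execute phase, and steers reads around it in the meantime. The class and method names (RaidController, start_gc, and so on), the selection rule, and the degraded-read shortcut are illustrative assumptions, not details taken from the patent.

```python
from enum import Enum, auto


class GcState(Enum):
    NORMAL = auto()
    SETUP = auto()     # drive is preparing to collect
    EXECUTE = auto()   # drive is actively reclaiming stale blocks


class Ssd:
    """Toy model of one SSD in the array (identifiers are illustrative)."""
    def __init__(self, ssd_id, free_pages, stale_pages):
        self.ssd_id = ssd_id
        self.free_pages = free_pages
        self.stale_pages = stale_pages
        self.state = GcState.NORMAL


class RaidController:
    """Lets one drive at a time garbage-collect, under controller control."""
    def __init__(self, ssds):
        self.ssds = ssds
        self.gc_target = None

    def start_gc(self):
        # Selection logic: pick the drive under the most space pressure.
        self.gc_target = min(self.ssds, key=lambda s: s.free_pages)
        self.gc_target.state = GcState.SETUP

    def execute_gc(self):
        self.gc_target.state = GcState.EXECUTE

    def finish_gc(self):
        self.gc_target.free_pages += self.gc_target.stale_pages
        self.gc_target.stale_pages = 0
        self.gc_target.state = GcState.NORMAL
        self.gc_target = None

    def read(self, ssd_id):
        # Control logic: while a drive is collecting, serve its reads by
        # reconstructing the data from the remaining drives (degraded read).
        drive = next(s for s in self.ssds if s.ssd_id == ssd_id)
        if drive.state is GcState.EXECUTE:
            return f"SSD {ssd_id} data reconstructed from peer drives"
        return f"SSD {ssd_id} data read directly"


if __name__ == "__main__":
    array = RaidController([Ssd(i, 100 - 20 * i, 30) for i in range(5)])
    array.start_gc()          # SSD 4 has the least free space
    array.execute_gc()
    print(array.read(4))      # served via reconstruction
    array.finish_gc()
    print(array.read(4))      # served directly again
```
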
  • Publication number: 20160335216
    Abstract: A computer network system configured with disaggregated inputs/outputs. This system can be configured in a leaf-spine architecture and can include a router coupled to a network source, a plurality of core switches coupled to the router, a plurality of aggregator switches coupled to each of the plurality of core switches, and a plurality of rack modules coupled to each of the plurality of aggregator switches. The plurality of rack modules can each include an I/O appliance with a downstream aggregator module, a plurality of server devices each with PCIe interfaces, and an upstream aggregator module that aggregates each of the PCIe interfaces. A high-speed link can be configured between the downstream and upstream aggregator modules via aggregation of many serial lanes to provide reliable high speed bit stream transport over long distances, which allows for better utilization of resources and scalability of memory capacity independent of the server count.
    Type: Application
    Filed: July 29, 2016
    Publication date: November 17, 2016
    Inventors: Sreenivas KRISHNAN, Nirmal Raj SAXENA
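
A rough topology sketch of the disaggregated-I/O arrangement in this abstract: each rack module's upstream aggregator bundles its servers' PCIe lanes onto a single high-speed link toward the I/O appliance. The counts, names, and lane widths below are made-up example values, not figures from the application.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Server:
    name: str
    pcie_lanes: int                      # lanes on this server's PCIe interface


@dataclass
class RackModule:
    servers: List[Server] = field(default_factory=list)

    def aggregated_lanes(self) -> int:
        # Upstream aggregator: bundle every server's lanes into one
        # high-speed serial link toward the I/O appliance.
        return sum(s.pcie_lanes for s in self.servers)


@dataclass
class Fabric:
    core_switches: int
    aggregator_switches: int
    racks: List[RackModule] = field(default_factory=list)


if __name__ == "__main__":
    rack = RackModule([Server(f"server-{i}", pcie_lanes=8) for i in range(4)])
    fabric = Fabric(core_switches=2, aggregator_switches=4, racks=[rack])
    print(f"{rack.aggregated_lanes()} PCIe lanes aggregated onto the rack's high-speed link")
```
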
  • Publication number: 20160328347
    Abstract: A flexible storage system. A storage motherboard accommodates, on a suitable connector, a storage adapter circuit that provides protocol translation between a host bus interface and a storage interface, and that provides routing, to accommodate a plurality of mass storage devices that may be connected to the storage adapter circuit through the storage motherboard. The storage adapter circuit may be replaced with a circuit supporting a different host interface or a different storage interface.
    Type: Application
    Filed: April 4, 2016
    Publication date: November 10, 2016
    Inventors: Fred Worley, Harry Rogers, Sreenivas Krishnan, Zhan Ping, Michael Scriber
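
Structurally, the adapter circuit in this abstract is a pluggable translation-and-routing layer between a host bus and whatever storage interface is installed. The sketch below shows that shape with a generic adapter pattern; the interface classes, the striping rule, and every identifier are assumptions made for illustration, not the patented design.

```python
from abc import ABC, abstractmethod


class StorageInterface(ABC):
    """A pluggable back-end protocol (stand-in for whatever the adapter speaks)."""

    @abstractmethod
    def read_block(self, drive: int, lba: int) -> bytes: ...


class DummyBackend(StorageInterface):
    """Fake back end that just echoes where the request was routed."""

    def read_block(self, drive: int, lba: int) -> bytes:
        return f"drive={drive} lba={lba}".encode()


class StorageAdapter:
    """Translates host-side reads into back-end reads and routes them.

    Swapping `backend` for a different StorageInterface models replacing
    the adapter circuit to support a different storage interface.
    """

    def __init__(self, backend: StorageInterface, drives: int):
        self.backend = backend
        self.drives = drives

    def host_read(self, host_lba: int) -> bytes:
        # Routing: a trivial striping rule maps the host address onto one
        # of the mass storage devices behind the adapter.
        drive = host_lba % self.drives
        lba = host_lba // self.drives
        return self.backend.read_block(drive, lba)


if __name__ == "__main__":
    adapter = StorageAdapter(DummyBackend(), drives=4)
    print(adapter.host_read(10))   # routed to drive 2, block 2
```
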
  • Patent number: 9430437
    Abstract: A computer network system configured with disaggregated inputs/outputs. This system can be configured in a leaf-spine architecture and can include a router coupled to a network source, a plurality of core switches coupled to the router, a plurality of aggregator switches coupled to each of the plurality of core switches, and a plurality of rack modules coupled to each of the plurality of aggregator switches. The plurality of rack modules can each include an I/O appliance with a downstream aggregator module, a plurality of server devices each with PCIe interfaces, and an upstream aggregator module that aggregates each of the PCIe interfaces. A high-speed link can be configured between the downstream and upstream aggregator modules via aggregation of many serial lanes to provide reliable high speed bit stream transport over long distances, which allows for better utilization of resources and scalability of memory capacity independent of the server count.
    Type: Grant
    Filed: August 9, 2013
    Date of Patent: August 30, 2016
    Assignee: INPHI CORPORATION
    Inventors: Sreenivas Krishnan, Nirmal Raj Saxena
  • Publication number: 20160110136
    Abstract: Techniques for a massively parallel and memory centric computing system. The system has a plurality of processing units operably coupled to each other through one or more communication channels. Each of the plurality of processing units has an ISMn interface device. Each of the plurality of ISMn interface devices is coupled to an ISMe endpoint connected to each of the processing units. The system has a plurality of DRAM or Flash memories configured in a disaggregated architecture and one or more switch nodes operably coupling the plurality of DRAM or Flash memories in the disaggregated architecture. The system has a plurality of high speed optical cables configured to communicate at a transmission rate of 100 G or greater to facilitate communication from any one of the plurality of processing units to any one of the plurality of DRAM or Flash memories.
    Type: Application
    Filed: December 18, 2015
    Publication date: April 21, 2016
    Inventors: Nirmal Raj SAXENA, Sreenivas KRISHNAN, David WANG
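
To make the any-to-any connectivity this abstract describes concrete, the sketch below models processing units, switch nodes, and disaggregated memory modules as a small graph and checks that every unit can reach every memory module. The node names, counts, and wiring pattern are arbitrary illustrations; nothing here reflects the actual ISMn/ISMe interface or link design.

```python
import itertools

# Tiny connectivity model: processing units and memory modules both hang
# off switch nodes, so any unit can reach any memory over the fabric.
processing_units = [f"pu{i}" for i in range(4)]
switch_nodes = ["switch0", "switch1"]
memory_modules = [f"mem{i}" for i in range(8)]

links = {("switch0", "switch1")}                       # switches interconnect
for i, pu in enumerate(processing_units):
    links.add((pu, switch_nodes[i % len(switch_nodes)]))
for i, mem in enumerate(memory_modules):
    links.add((switch_nodes[i % len(switch_nodes)], mem))


def reachable(src, dst):
    """Depth-first walk over the undirected link set."""
    stack, seen = [src], {src}
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        for a, b in links:
            nxt = b if a == node else a if b == node else None
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False


if __name__ == "__main__":
    assert all(reachable(pu, mem)
               for pu, mem in itertools.product(processing_units, memory_modules))
    print("every processing unit can reach every disaggregated memory module")
```
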
  • Patent number: 9250831
    Abstract: Techniques for a massively parallel and memory centric computing system. The system has a plurality of processing units operably coupled to each other through one or more communication channels. Each of the plurality of processing units has an ISMn interface device. Each of the plurality of ISMn interface devices is coupled to an ISMe endpoint connected to each of the processing units. The system has a plurality of DRAM or Flash memories configured in a disaggregated architecture and one or more switch nodes operably coupling the plurality of DRAM or Flash memories in the disaggregated architecture. The system has a plurality of high speed optical cables configured to communicate at a transmission rate of 100 G or greater to facilitate communication from any one of the plurality of processing units to any one of the plurality of DRAM or Flash memories.
    Type: Grant
    Filed: February 28, 2014
    Date of Patent: February 2, 2016
    Assignee: INPHI CORPORATION
    Inventors: Nirmal Raj Saxena, Sreenivas Krishnan, David Wang
  • Publication number: 20140379846
    Abstract: A memory access pipeline within a subsystem is configured to manage memory access requests that are issued by clients of the subsystem. The memory access pipeline is capable of providing a software baseband controller client with sufficient memory bandwidth to initiate and maintain network connections. The memory access pipeline includes a tiered snap arbiter that prioritizes memory access requests. The memory access pipeline also includes a digital differential analyzer that monitors the amount of bandwidth consumed by each client and causes the tiered snap arbiter to buffer memory access requests associated with clients consuming excessive bandwidth. The memory access pipeline also includes a transaction store and latency analyzer configured to buffer pages associated with the baseband controller and to expedite memory access requests issued by the baseband controller when the latency associated with those requests exceeds a pre-set value.
    Type: Application
    Filed: June 20, 2013
    Publication date: December 25, 2014
    Applicant: NVIDIA Corporation
    Inventors: Mrudula KANURI, Sreenivas KRISHNAN
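
The bandwidth policing described here, a per-client consumption monitor feeding a tiered arbiter that buffers over-budget clients while keeping the latency-critical baseband client served, can be sketched as a simple credit scheme. This is a generic rate-limiting illustration under assumed client names, shares, and per-request cost, not the patented pipeline.

```python
class BandwidthMonitor:
    """DDA-flavored accounting: each client accrues credit at its allotted
    share per cycle and spends one credit per granted request; a client
    below zero is over budget.  Shares and costs are invented values."""

    def __init__(self, shares):
        self.shares = shares                       # client -> bandwidth fraction
        self.credit = {client: 0.0 for client in shares}

    def tick(self):
        for client, share in self.shares.items():
            self.credit[client] += share           # accumulate allotted bandwidth

    def grant(self, client):
        self.credit[client] -= 1.0                 # one request consumed

    def over_budget(self, client):
        return self.credit[client] < 0.0


class TieredArbiter:
    """Pick latency-critical requesters first, then anything still within
    budget; over-budget requesters are left buffered this cycle."""

    def __init__(self, monitor, critical):
        self.monitor = monitor
        self.critical = set(critical)

    def pick(self, requesters):
        in_budget = [c for c in requesters if not self.monitor.over_budget(c)]
        urgent = [c for c in in_budget if c in self.critical]
        choice = (urgent or in_budget or [None])[0]
        if choice is not None:
            self.monitor.grant(choice)
        return choice                               # None means everyone waits


if __name__ == "__main__":
    monitor = BandwidthMonitor({"baseband": 0.5, "display": 0.3, "cpu": 0.2})
    arbiter = TieredArbiter(monitor, critical=["baseband"])
    for cycle in range(5):
        monitor.tick()
        print(cycle, arbiter.pick(["baseband", "display", "cpu"]))
```
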
  • Patent number: 8489851
    Abstract: A memory controller provided according to an aspect of the present invention includes a predictor block which predicts future read requests after converting the memory address in a prior read request received from the processor to an address space consistent with the implementation of a memory unit. According to another aspect of the present invention, the predicted requests are granted access to a memory unit only when there are no requests pending from processors and the peripherals sending access requests to the memory unit.
    Type: Grant
    Filed: December 11, 2008
    Date of Patent: July 16, 2013
    Assignee: NVIDIA Corporation
    Inventors: Balajee Vamanan, Tukaram Methar, Mrudula Kanuri, Sreenivas Krishnan
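
The predictor described here converts a read address into the memory unit's own address space, speculates on a follow-on read, and only issues the speculative request when no real processor or peripheral request is pending. The sketch below models that priority rule; the row/bank/column widths and the fixed sequential stride are assumptions for illustration, not the patented scheme.

```python
def to_dram_address(byte_addr, cols=1024, banks=8):
    """Map a flat byte address into (row, bank, column); the widths here
    are arbitrary example values, not real memory geometry."""
    col = byte_addr % cols
    bank = (byte_addr // cols) % banks
    row = byte_addr // (cols * banks)
    return row, bank, col


class PredictingController:
    """Real requests always win arbitration; a predicted read goes out
    only when nothing real is pending."""

    def __init__(self, stride=64):
        self.stride = stride                      # assume sequential 64-byte bursts
        self.predicted = None

    def submit(self, pending_reads):
        if pending_reads:
            addr = pending_reads.pop(0)
            self.predicted = addr + self.stride   # predict the next access
            return ("real", to_dram_address(addr))
        if self.predicted is not None:
            addr, self.predicted = self.predicted, None
            return ("prefetch", to_dram_address(addr))
        return ("idle", None)


if __name__ == "__main__":
    controller = PredictingController()
    queue = [0x1000, 0x2000]
    for _ in range(4):
        print(controller.submit(queue))
    # -> two "real" accesses, one low-priority "prefetch", then "idle"
```
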
  • Patent number: 8261121
    Abstract: A method includes operating an arbitration logic of a memory controller at a core clock frequency lower than that of a memory clock frequency. The memory controller is configured to generate a command sequence including a number of commands in accordance with a number of external requests to access the memory. The method also includes parallelizing the number of commands in the command sequence based on a timing requirement for a non-first command in the command sequence defined by a memory-access protocol being satisfied at a rising edge or a falling edge of the core clock relative to a previous command in the command sequence. Further, the method includes ensuring, through the parallelizing, availability of the number of commands in the command sequence to a memory interface operating at the memory clock frequency at a command rate equal to the memory clock frequency.
    Type: Grant
    Filed: December 24, 2009
    Date of Patent: September 4, 2012
    Assignee: NVIDIA Corporation
    Inventors: Tukaram Shankar Methar, Balajee Vamanan, Sreenivas Krishnan
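
The parallelization step can be pictured as packing commands whose protocol-required spacing is already satisfied within the current core-clock cycle, so that the memory interface, running at the faster memory clock, still receives one command per memory clock. The sketch below does this for an assumed 1:2 core-to-memory clock ratio; the command names and timing gaps are invented example values, not DDR parameters.

```python
def parallelize(commands, ratio=2):
    """Group commands into core-clock cycles.  Each command is
    (name, min_gap), where min_gap is the required spacing, in memory-clock
    cycles, from the previous command.  A non-first command joins the
    current group only if its requirement is met on one of this core
    cycle's memory-clock edges (rising or falling when ratio == 2)."""
    groups, current, slot = [], [], 0
    for name, min_gap in commands:
        if current and slot + min_gap < ratio:
            slot += min_gap
            current.append(name)           # issue in the same core cycle
        else:
            if current:
                groups.append(current)
            current, slot = [name], 0      # start the next core cycle
    if current:
        groups.append(current)
    return groups


if __name__ == "__main__":
    # Example spacings: ACT->READ needs 2 memory clocks, READ->READ needs 1.
    sequence = [("ACT", 0), ("READ", 2), ("READ", 1), ("READ", 1)]
    print(parallelize(sequence))           # [['ACT'], ['READ', 'READ'], ['READ']]
```
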
  • Publication number: 20110161713
    Abstract: A method includes operating an arbitration logic of a memory controller at a core clock frequency lower than that of a memory clock frequency. The memory controller is configured to generate a command sequence including a number of commands in accordance with a number of external requests to access the memory. The method also includes parallelizing the number of commands in the command sequence based on a timing requirement for a non-first command in the command sequence defined by a memory-access protocol being satisfied at a rising edge or a falling edge of the core clock relative to a previous command in the command sequence. Further, the method includes ensuring, through the parallelizing, availability of the number of commands in the command sequence to a memory interface operating at the memory clock frequency at a command rate equal to the memory clock frequency.
    Type: Application
    Filed: December 24, 2009
    Publication date: June 30, 2011
    Inventors: Tukaram Shankar Methar, Balajee Vamanan, Sreenivas Krishnan
  • Publication number: 20100153661
    Abstract: A memory controller provided according to an aspect of the present invention includes a predictor block which predicts future read requests after converting the memory address in a prior read request received from the processor to an address space consistent with the implementation of a memory unit. According to another aspect of the present invention, the predicted requests are granted access to a memory unit only when there are no requests pending from processors and the peripherals sending access requests to the memory unit.
    Type: Application
    Filed: December 11, 2008
    Publication date: June 17, 2010
    Applicant: NVIDIA Corporation
    Inventors: Balajee Vamanan, Tukaram Methar, Mrudula Kanuri, Sreenivas Krishnan