Patents by Inventor Mrudula Kanuri

Mrudula Kanuri has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150123977
    Abstract: A method for synchronizing a plurality of pixel processing units is disclosed. The method includes sending a first trigger to a first pixel processing unit to execute a first operation on a portion of a frame of data. The method also includes sending a second trigger to a second pixel processing unit to execute a second operation on the portion of the frame of data when the first operation has completed. The first operation has completed when the first operation reaches a sub-frame boundary.
    Type: Application
    Filed: November 6, 2013
    Publication date: May 7, 2015
    Applicant: Nvidia Corporation
    Inventors: Mrudula KANURI, Kamal JEET
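Illustrative sketch for publication 20150123977: the abstract describes triggering a second pixel processing unit on a portion of a frame only after the first unit's operation on that portion reaches a sub-frame boundary. The short Python sketch below approximates that chaining with hypothetical unit functions and a hypothetical sub-frame size; it is not the patented implementation.

```python
# Minimal sketch (assumptions, not the patented design): two pixel processing
# units are synchronized so the second unit is triggered on a portion of the
# frame only after the first unit's operation on that portion has reached a
# sub-frame boundary.

FRAME_LINES = 8          # hypothetical frame height, in lines
SUBFRAME_LINES = 2       # hypothetical sub-frame (slice) granularity

def unit_a(lines):
    """First pixel processing unit: a hypothetical scaling pass."""
    return [line * 2 for line in lines]

def unit_b(lines):
    """Second pixel processing unit: a hypothetical offset pass."""
    return [line + 1 for line in lines]

def run_pipeline(frame):
    processed = []
    # Walk the frame one sub-frame at a time; the "second trigger" for unit B
    # is simply the completion of unit A on the same sub-frame.
    for start in range(0, len(frame), SUBFRAME_LINES):
        subframe = frame[start:start + SUBFRAME_LINES]
        intermediate = unit_a(subframe)          # first trigger: unit A runs
        processed.extend(unit_b(intermediate))   # second trigger: unit B runs
    return processed

if __name__ == "__main__":
    frame = list(range(FRAME_LINES))
    print(run_pipeline(frame))   # [1, 3, 5, 7, 9, 11, 13, 15]
```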
  • Publication number: 20140379846
    Abstract: A memory access pipeline within a subsystem is configured to manage memory access requests that are issued by clients of the subsystem. The memory access pipeline is capable of providing a software baseband controller client with sufficient memory bandwidth to initiate and maintain network connections. The memory access pipeline includes a tiered snap arbiter that prioritizes memory access requests. The memory access pipeline also includes a digital differential analyzer that monitors the amount of bandwidth consumed by each client and causes the tiered snap arbiter to buffer memory access requests associated with clients consuming excessive bandwidth. The memory access pipeline also includes a transaction store and latency analyzer configured to buffer pages associated with the baseband controller and to expedite memory access requests issued by the baseband controller when the latency associated with those requests exceeds a pre-set value.
    Type: Application
    Filed: June 20, 2013
    Publication date: December 25, 2014
    Applicant: NVIDIA CORPORATION
    Inventors: Mrudula KANURI, Sreenivas KRISHNAN
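Illustrative sketch for publication 20140379846: one element of the abstract, the digital differential analyzer that tracks per-client bandwidth and causes over-budget requests to be buffered, can be approximated with a small credit accounting loop. The class names, rates, and request sizes below are assumptions, not details from the patent.

```python
# Minimal sketch (assumptions only): a digital-differential-analyzer style
# bandwidth accountant. Each client accrues credit at its allocated rate; a
# client whose credit cannot cover a request is treated as consuming
# excessive bandwidth, so the arbiter buffers (defers) that request.

from collections import deque

class ClientAccount:
    def __init__(self, rate_bytes_per_cycle):
        self.rate = rate_bytes_per_cycle
        self.credit = 0.0

class Arbiter:
    def __init__(self, clients):
        self.clients = clients      # name -> ClientAccount
        self.deferred = deque()     # buffered over-budget requests

    def tick(self):
        # DDA step: every cycle each client accrues credit at its rate.
        for acct in self.clients.values():
            acct.credit += acct.rate

    def submit(self, name, nbytes):
        acct = self.clients[name]
        if acct.credit < nbytes:
            self.deferred.append((name, nbytes))   # over budget: buffer it
            return "deferred"
        acct.credit -= nbytes
        return "granted"

if __name__ == "__main__":
    arb = Arbiter({"baseband": ClientAccount(64), "gpu": ClientAccount(16)})
    for _ in range(4):
        arb.tick()
    print(arb.submit("baseband", 128))   # granted (256 bytes of credit accrued)
    print(arb.submit("gpu", 128))        # deferred (only 64 bytes accrued)
```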
  • Patent number: 8683126
    Abstract: A storage controller which uses the same buffer to store data elements retrieved from different secondary storage units. In an embodiment, the controller retrieves location descriptors ahead of when data is available for storing in a target memory. Each location descriptor indicates the memory locations at which data received from a secondary storage is to be stored. Only a subset of the location descriptors may be retrieved and stored ahead when processing each request. Due to such retrieval and storing of a limited number of location descriptors, the size of a buffer used by the storage controller may be reduced. Due to retrieval of the location descriptors ahead, unneeded buffering of the data elements within the storage controller is avoided, reducing the latency in writing the data into the main memory, thus improving performance.
    Type: Grant
    Filed: July 30, 2007
    Date of Patent: March 25, 2014
    Assignee: Nvidia Corporation
    Inventor: Mrudula Kanuri
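Illustrative sketch for patent 8683126: the abstract describes prefetching only a bounded window of location descriptors so that arriving data can be written straight to its target memory locations. The sketch below models that idea with a hypothetical descriptor window and byte-level writes; it is a simplification, not the patented design.

```python
# Minimal sketch (hypothetical names): the controller fetches only a small
# window of location descriptors ahead of the incoming data, so arriving data
# elements can be written directly to their target memory locations without
# being held in a large internal buffer.

from collections import deque

DESCRIPTOR_WINDOW = 4   # assumed limit on descriptors prefetched ahead

class StorageController:
    def __init__(self, descriptor_list, memory):
        self.pending = deque(descriptor_list)   # (address, length) entries
        self.ready = deque()                    # prefetched descriptors
        self.memory = memory                    # dict: address -> byte

    def prefetch(self):
        # Fetch descriptors ahead of time, but only up to the window size,
        # which bounds the controller's internal storage.
        while self.pending and len(self.ready) < DESCRIPTOR_WINDOW:
            self.ready.append(self.pending.popleft())

    def on_data(self, data):
        # Data arriving from a secondary storage unit is written straight to
        # the target memory described by the next prefetched descriptor.
        addr, length = self.ready.popleft()
        for i, byte in enumerate(data[:length]):
            self.memory[addr + i] = byte
        self.prefetch()

if __name__ == "__main__":
    mem = {}
    ctrl = StorageController([(0x1000, 2), (0x2000, 2)], mem)
    ctrl.prefetch()
    ctrl.on_data(b"ab")
    ctrl.on_data(b"cd")
    print({hex(a): chr(b) for a, b in sorted(mem.items())})
```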
  • Patent number: 8549170
    Abstract: A system and method are provided for performing the retransmission of data in a network. Included is an offload engine in communication with system memory and a network. The offload engine serves for managing the retransmission of data transmitted in the network.
    Type: Grant
    Filed: December 19, 2003
    Date of Patent: October 1, 2013
    Assignee: NVIDIA Corporation
    Inventors: John Shigeto Minami, Michael Ward Johnson, Andrew Currid, Mrudula Kanuri
  • Patent number: 8489851
    Abstract: A memory controller provided according to an aspect of the present invention includes a predictor block which predicts future read requests after converting the memory address in a prior read request received from the processor to an address space consistent with the implementation of a memory unit. According to another aspect of the present invention, the predicted requests are granted access to a memory unit only when there are no requests pending from processors and the peripherals sending access requests to the memory unit.
    Type: Grant
    Filed: December 11, 2008
    Date of Patent: July 16, 2013
    Assignee: NVIDIA Corporation
    Inventors: Balajee Vamanan, Tukaram Methar, Mrudula Kanuri, Sreenivas Krishnan
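Illustrative sketch for patent 8489851: the abstract describes translating a processor read address into the memory unit's address space, predicting a future read, and granting the predicted read only when no real requests are pending. The sketch below makes the simplifying assumption that the prediction is the next sequential line; all names and sizes are hypothetical.

```python
# Minimal sketch under stated assumptions (none of these names come from the
# patent): after serving a real read, the controller predicts the next
# sequential access in the memory unit's own address space, but the predicted
# read is issued only when no real requests from processors or peripherals
# are waiting.

from collections import deque

LINE_BYTES = 64   # hypothetical fetch granularity

def to_device_address(cpu_addr):
    # Stand-in for converting the processor address into the address space
    # used by the memory unit (here: dropping the low offset bits).
    return cpu_addr & ~(LINE_BYTES - 1)

class PredictiveController:
    def __init__(self):
        self.real_queue = deque()
        self.predicted = None

    def enqueue(self, cpu_addr):
        dev_addr = to_device_address(cpu_addr)
        self.real_queue.append(dev_addr)
        self.predicted = dev_addr + LINE_BYTES   # guess the next line

    def next_command(self):
        if self.real_queue:                 # real traffic always wins
            return ("read", self.real_queue.popleft())
        if self.predicted is not None:      # idle: issue the speculative read
            addr, self.predicted = self.predicted, None
            return ("prefetch", addr)
        return None

if __name__ == "__main__":
    ctrl = PredictiveController()
    ctrl.enqueue(0x12345)
    print(ctrl.next_command())   # ('read', 0x12340)
    print(ctrl.next_command())   # ('prefetch', 0x12380)
```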
  • Patent number: 8370552
    Abstract: A scheduler provided according to an aspect of the present invention gives higher priority to data units in a low priority queue upon occurrence of a starvation condition, and to data units in a high priority queue otherwise. The scheduler permits retransmission of a data unit in the lower priority queue when in the starvation condition, but clears the starvation condition when the data unit has been retransmitted a pre-specified number of times. As a result, the data units in the higher priority queue continue to be processed, thereby avoiding a deadlock at least in certain situations.
    Type: Grant
    Filed: October 14, 2008
    Date of Patent: February 5, 2013
    Assignee: Nvidia Corporation
    Inventors: Aditya Mittal, Mrudula Kanuri, Venkata Malladi
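Illustrative sketch for patent 8370552: the scheduling rule in the abstract (serve the high priority queue normally, serve the starved low priority queue while starvation holds, and clear starvation after a pre-specified number of retransmissions) can be expressed as a small state machine. The thresholds and queue contents below are assumed values for demonstration only.

```python
# Minimal sketch with hypothetical thresholds: the high-priority queue is
# normally served first; when the low-priority queue is starved it is served
# (its head unit is retransmitted) instead, and the starvation condition is
# cleared once that unit has been retransmitted a pre-specified number of
# times, so the high-priority queue is never blocked indefinitely.

from collections import deque

STARVATION_LIMIT = 2   # assumed: waits that count as starvation
MAX_RETRANSMITS = 2    # assumed: retransmissions allowed while starved

class Scheduler:
    def __init__(self):
        self.high = deque()
        self.low = deque()
        self.low_wait = 0
        self.starved = False
        self.retransmits = 0

    def pick(self):
        if self.low and not self.starved:
            self.low_wait += 1
            if self.low_wait >= STARVATION_LIMIT:
                self.starved = True
        if self.starved and self.low:
            unit = self.low[0]               # retransmit the starved unit;
            self.retransmits += 1            # it stays queued until served
            if self.retransmits >= MAX_RETRANSMITS:
                self.starved = False         # clear starvation: high resumes
                self.low_wait = 0
                self.retransmits = 0
            return ("low", unit)
        if self.high:
            return ("high", self.high.popleft())
        if self.low:
            self.low_wait = 0
            return ("low", self.low.popleft())
        return None

if __name__ == "__main__":
    s = Scheduler()
    s.high.extend(["h1", "h2", "h3", "h4"])
    s.low.append("l1")
    # Alternates between retransmitting "l1" while starved and serving the
    # high queue once starvation clears, so high traffic keeps advancing.
    print([s.pick() for _ in range(8)])
```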
  • Patent number: 8065439
    Abstract: A system, method, and related data structure are provided for transmitting data in a network. Included is a data object (i.e. metadata) for communicating between a first network protocol layer and a second network protocol layer. In use, the data object facilitates network communication management utilizing a transport offload engine.
    Type: Grant
    Filed: December 19, 2003
    Date of Patent: November 22, 2011
    Assignee: NVIDIA Corporation
    Inventors: Michael Ward Johnson, Andrew Currid, Mrudula Kanuri, John Shigeto Minami
  • Patent number: 7957379
    Abstract: A system and method are provided for processing packets received via a network. In use, data packets and control packets are received via a network. Further, the data packets are processed in parallel with the control packets.
    Type: Grant
    Filed: October 19, 2004
    Date of Patent: June 7, 2011
    Assignee: NVIDIA Corporation
    Inventors: John Shigeto Minami, Robia Y. Uyeshiro, Thien E. Ooi, Michael Ward Johnson, Mrudula Kanuri
  • Publication number: 20100153661
    Abstract: A memory controller provided according to an aspect of the present invention includes a predictor block which predicts future read requests after converting the memory address in a prior read request received from the processor to an address space consistent with the implementation of a memory unit. According to another aspect of the present invention, the predicted requests are granted access to a memory unit only when there are no requests pending from processors and the peripherals sending access requests to the memory unit.
    Type: Application
    Filed: December 11, 2008
    Publication date: June 17, 2010
    Applicant: NVIDIA Corporation
    Inventors: Balajee Vamanan, Tukaram Methar, Mrudula Kanuri, Sreenivas Krishnan
  • Publication number: 20100095036
    Abstract: A scheduler provided according to an aspect of the present invention gives higher priority to data units in a low priority queue upon occurrence of a starvation condition, and to data units in a high priority queue otherwise. The scheduler permits retransmission of a data unit in the lower priority queue when in the starvation condition, but clears the starvation condition when the data unit has been retransmitted a pre-specified number of times. As a result, the data units in the higher priority queue continue to be processed, thereby avoiding a deadlock at least in certain situations.
    Type: Application
    Filed: October 14, 2008
    Publication date: April 15, 2010
    Applicant: NVIDIA Corporation
    Inventors: Aditya Mittal, Mrudula Kanuri, Venkata Malladi
  • Patent number: 7624198
    Abstract: A system and method are provided for communicating data in a network utilizing a transport offload engine. Included is a data list object that describes how data communicated in a network is to be stored (i.e. placed, etc.) in memory (i.e. application memory). Stored in association (i.e. located, kept together, etc.) with the data list object is a sequence object. Such sequence object identifies a sequence space associated with the data to be stored using the data list object. To this end, the sequence object is used by a transport offload engine to determine whether or not incoming data is to be stored using the data list object.
    Type: Grant
    Filed: December 19, 2003
    Date of Patent: November 24, 2009
    Assignee: NVIDIA Corporation
    Inventors: Michael Ward Johnson, Andrew Currid, Mrudula Kanuri, John Shigeto Minami
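Illustrative sketch for patent 7624198: the abstract pairs a data list object, which describes where received data is placed in application memory, with a sequence object that records the sequence space the list covers. The sketch below checks an incoming segment against that sequence range before placing it; the structures and field names are hypothetical.

```python
# Minimal sketch (hypothetical structures): a data list describes where
# received data should be placed, and the sequence object stored with it
# records the sequence-number range that the data list covers. The offload
# engine consults the sequence object to decide whether an incoming segment
# should be placed through this data list at all.

class DataList:
    """Scatter-gather style placement description plus its sequence range."""
    def __init__(self, buffers, seq_start, seq_len):
        self.buffers = buffers          # list of (address, length) tuples
        self.seq_start = seq_start      # sequence object: first byte covered
        self.seq_end = seq_start + seq_len

    def covers(self, seq, nbytes):
        return self.seq_start <= seq and seq + nbytes <= self.seq_end

def place(data_list, seq, payload, memory):
    """Place payload via the data list if its sequence space matches."""
    if not data_list.covers(seq, len(payload)):
        return False                    # not for this list: handle elsewhere
    offset = seq - data_list.seq_start
    for addr, length in data_list.buffers:
        if not payload:
            break
        if offset >= length:            # skip buffers before our offset
            offset -= length
            continue
        chunk, payload = payload[:length - offset], payload[length - offset:]
        for i, byte in enumerate(chunk):
            memory[addr + offset + i] = byte
        offset = 0
    return True

if __name__ == "__main__":
    mem = {}
    dl = DataList([(0x1000, 4), (0x2000, 4)], seq_start=1000, seq_len=8)
    print(place(dl, 1002, b"wxyz", mem))   # True: within the sequence range
    print(place(dl, 2000, b"oops", mem))   # False: outside the range
    print({hex(a): chr(b) for a, b in sorted(mem.items())})
```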
  • Patent number: 7502366
    Abstract: A network switch includes network switch ports, each including a port filter configured for detecting user-selected attributes from a received layer 2 type data frame. Each port filter, upon detecting a user-selected attribute in a received layer 2 type data frame, sends a signal to a switching module indicating the determined presence of the user-selected attribute, enabling the switching module to generate a switching decision based on the corresponding user-selected attribute and based on a corresponding user-defined switching policy. The switching policy may specify a priority class, or a guaranteed quality of service (e.g., a guaranteed bandwidth), ensuring that the received layer 2 type data frame receives the appropriate switching support. The user-selected attributes for the port filter and the user-defined switching policy for the switching module are programmed by a host processor.
    Type: Grant
    Filed: May 23, 2000
    Date of Patent: March 10, 2009
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Bahadir Erimli, Gopal S. Krishna, Chandan Egbert, Peter Ka-Fai Chow, Mrudula Kanuri, Shr-Jie Tzeng, Somnath Viswanath, Xiaohua Zhuang
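Illustrative sketch for patent 7502366: the abstract describes per-port filters that detect user-selected attributes in a layer 2 frame and a switching module that applies a user-defined policy such as a priority class. The filter patterns, offsets, and policy table below are invented examples, not values from the patent.

```python
# Minimal sketch (assumed frame layout and policy shape): each port filter
# checks a received layer 2 frame for user-selected attributes and signals
# the switching module, which applies the user-defined switching policy
# (here, a priority class) for each detected attribute.

# Hypothetical user-selected attributes: (offset, expected bytes) per port.
PORT_FILTERS = {
    1: {"vlan_tagged": (12, b"\x81\x00")},   # EtherType 0x8100 at offset 12
    2: {"ipv4": (12, b"\x08\x00")},          # EtherType 0x0800 at offset 12
}

# Hypothetical user-defined switching policies keyed by attribute name.
SWITCHING_POLICY = {
    "vlan_tagged": {"priority_class": 6},
    "ipv4": {"priority_class": 2},
}

def port_filter(port, frame):
    """Return the names of user-selected attributes present in the frame."""
    hits = []
    for name, (offset, pattern) in PORT_FILTERS.get(port, {}).items():
        if frame[offset:offset + len(pattern)] == pattern:
            hits.append(name)
    return hits

def switch_decision(port, frame):
    """Combine filter hits with the configured policy into a decision."""
    decision = {"priority_class": 0}         # default best-effort class
    for attribute in port_filter(port, frame):
        decision.update(SWITCHING_POLICY[attribute])
    return decision

if __name__ == "__main__":
    frame = bytes(12) + b"\x81\x00" + bytes(50)   # VLAN-tagged frame stub
    print(switch_decision(1, frame))              # {'priority_class': 6}
    print(switch_decision(2, frame))              # {'priority_class': 0}
```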
  • Publication number: 20090037689
    Abstract: A storage controller which uses the same buffer to store data elements retrieved from different secondary storage units. In an embodiment, the controller retrieves location descriptors ahead of when data is available for storing in a target memory. Each location descriptor indicates the memory locations at which data received from a secondary storage is to be stored. Only a subset of the location descriptors may be retrieved and stored ahead when processing each request. Due to such retrieval and storing of a limited number of location descriptors, the size of a buffer used by the storage controller may be reduced. Due to retrieval of the location descriptors ahead, unneeded buffering of the data elements within the storage controller is avoided, reducing the latency in writing the data into the main memory, thus improving performance.
    Type: Application
    Filed: July 30, 2007
    Publication date: February 5, 2009
    Applicant: NVIDIA Corporation
    Inventor: Mrudula Kanuri
  • Patent number: 7293113
    Abstract: A communication processor comprises a data link layer parser circuit (310) and a plurality of network layer parser circuits (322, 326). The data link layer parser circuit (310) receives a data link layer frame, and removes a data link layer header therefrom to provide a network layer frame as an output. Each network layer parser circuit corresponds to a different network layer protocol, and is selectively activated to receive the network layer frame and to process a network layer header therefrom to provide a transport layer frame as an output. The data link layer parser circuit (310) further examines a portion of the network layer frame to determine which of the plurality of network protocols is used. The data link layer parser circuit (310) activates a corresponding one of the plurality of network layer parser circuits (322, 326) in response, while keeping another one of the plurality of network layer parser circuits (322, 326) inactive.
    Type: Grant
    Filed: May 28, 2003
    Date of Patent: November 6, 2007
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Gopal Krishna, Mrudula Kanuri
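Illustrative sketch for patent 7293113: the abstract describes a data link layer parser that strips the layer 2 header, examines part of the resulting network layer frame, and activates only the matching network layer parser. The sketch below assumes plain Ethernet II framing and uses the EtherType field for that selection; it is an approximation, not the patented circuit.

```python
# Minimal sketch (assumes Ethernet II framing; names are illustrative): the
# data link layer parser removes the layer 2 header, inspects a field of the
# resulting network layer frame, and activates only the network layer parser
# that matches the protocol, leaving the others inactive.

import struct

def parse_ipv4(frame):
    ihl = (frame[0] & 0x0F) * 4          # IPv4 header length in bytes
    return {"protocol": "IPv4", "payload": frame[ihl:]}

def parse_ipv6(frame):
    return {"protocol": "IPv6", "payload": frame[40:]}   # fixed 40-byte header

NETWORK_PARSERS = {0x0800: parse_ipv4, 0x86DD: parse_ipv6}

def data_link_parser(link_frame):
    # Remove the 14-byte Ethernet header and read the EtherType to decide
    # which network layer parser to activate.
    ethertype = struct.unpack("!H", link_frame[12:14])[0]
    network_frame = link_frame[14:]
    parser = NETWORK_PARSERS.get(ethertype)
    if parser is None:
        return {"protocol": "unknown", "payload": network_frame}
    return parser(network_frame)

if __name__ == "__main__":
    ip_header = bytes([0x45]) + bytes(19)            # minimal IPv4 header stub
    frame = bytes(12) + b"\x08\x00" + ip_header + b"payload"
    print(data_link_parser(frame)["protocol"])       # IPv4
```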
  • Patent number: 7260631
    Abstract: An Internet small computer system interface (iSCSI) system, method and associated data structure are provided for receiving data in protocol data units. After a protocol data unit is received, a data list is identified that describes how the data contained in the protocol data unit is to be stored (i.e. placed, saved, etc.) in memory (i.e. application memory). Further stored is a state of the data list. To this end, the state of the data list is used in conjunction with the storage of data from a subsequent protocol data unit.
    Type: Grant
    Filed: December 19, 2003
    Date of Patent: August 21, 2007
    Assignee: NVIDIA Corporation
    Inventors: Michael Ward Johnson, Andrew Currid, Mrudula Kanuri, John Shigeto Minami
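Illustrative sketch for patent 7260631: the abstract describes saving the state of a data list so that data from a subsequent protocol data unit is placed where the previous one left off. The sketch below keeps that state as a buffer index and an offset; the structures are hypothetical and the iSCSI protocol details are omitted.

```python
# Minimal sketch (hypothetical structures, not the patented format): the
# saved state of the data list records which buffer is current and how many
# bytes of it have been consumed, so the data of the next protocol data unit
# continues to be placed exactly where the previous one stopped.

class DataListState:
    def __init__(self, buffers):
        self.buffers = buffers      # list of (address, length)
        self.index = 0              # current buffer
        self.used = 0               # bytes already consumed in that buffer

def place_pdu(state, payload, memory):
    """Place one PDU's data and update the saved data list state."""
    for byte in payload:
        addr, length = state.buffers[state.index]
        memory[addr + state.used] = byte
        state.used += 1
        if state.used == length:    # buffer exhausted: advance to the next
            state.index += 1
            state.used = 0

if __name__ == "__main__":
    mem = {}
    state = DataListState([(0x1000, 3), (0x2000, 3)])
    place_pdu(state, b"ab", mem)    # first PDU fills part of buffer 0
    place_pdu(state, b"cd", mem)    # second PDU resumes from the saved state
    print({hex(a): chr(b) for a, b in sorted(mem.items())})
```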
  • Patent number: 7103035
    Abstract: A network switch, configured for performing layer 2 and layer 3 switching in an Ethernet (IEEE 802.3) network without blocking of incoming data packets, includes a switching module for performing layer 2 and layer 3 switching operations, and a plurality of network switch ports, each configured for connecting the network switch to a corresponding subnetwork. The switching module includes a plurality of address tables for storing address information (e.g., layer 2 and layer 3 address and switching information), where each table is configured for storing the address information of a corresponding one of the subnetworks. The use of multiple address tables within the switching module enables the time for looking up address information to be substantially reduced, especially since the multiple address tables can be accessed independently and simultaneously by the switching module.
    Type: Grant
    Filed: January 14, 2000
    Date of Patent: September 5, 2006
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Mrudula Kanuri
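Illustrative sketch for patent 7103035: the abstract describes keeping one address table per connected subnetwork, so a lookup touches a small per-subnet table and different tables can be searched independently and simultaneously. The sketch below uses per-subnet dictionaries and a thread pool purely to illustrate that independence; the table contents are invented.

```python
# Minimal sketch (illustrative only): one address table per subnetwork, so a
# lookup consults a small per-subnet table rather than one large shared
# table, and tables for different subnets can be searched independently.

from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-subnet address tables: MAC address -> egress port.
ADDRESS_TABLES = {
    "10.1.0.0/16": {"aa:bb:cc:00:00:01": 3},
    "10.2.0.0/16": {"aa:bb:cc:00:00:02": 7},
}

def lookup(subnet, mac):
    return subnet, ADDRESS_TABLES[subnet].get(mac)

if __name__ == "__main__":
    # The per-subnet tables can be consulted simultaneously.
    with ThreadPoolExecutor() as pool:
        results = dict(pool.map(lookup, ADDRESS_TABLES.keys(),
                                ["aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"]))
    print(results)   # {'10.1.0.0/16': 3, '10.2.0.0/16': 7}
```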
  • Patent number: 7099285
    Abstract: A multiport switching device includes a configuration table that stores associations between addresses of subnets directly connected to the switching device and the port number of the multiport switching device that leads to the subnet. A host processor connected to the multiport switching device updates and maintains the configuration table. A remote processor communicates with the switching device through the host processor. To facilitate the communication of the remote processor with the multiport switch, the host processor executes a TCP/IP stack and the multiport switch is assigned a unique IP address.
    Type: Grant
    Filed: June 15, 2001
    Date of Patent: August 29, 2006
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Mrudula Kanuri, Somnath Viswanath, Gopal S. Krishna
  • Patent number: 7079537
    Abstract: A network switch, configured for performing layer 2 and layer 3 switching in an Ethernet (IEEE 802.3) network without blocking of incoming data packets, includes a switching module for performing layer 2 and layer 3 (specifically Internet Protocol) switching operations, and a plurality of network switch ports, each configured for connecting the network switch to a corresponding subnetwork. The switching module includes address tables for storing address information (e.g., layer 2 and layer 3 address and switching information). The network switching module is configured for performing prescribed layer 3 switching that enables transfer of data packets between subnetworks, bypassing a router that normally would need to manage Internet protocol switching between subnetworks of the network. Hence, the network switch performs Internet Protocol switching for intranetwork (i.e., inter-subnetwork) traffic, improving efficiency of the router by enabling the router resources to support more subnetworks.
    Type: Grant
    Filed: April 25, 2000
    Date of Patent: July 18, 2006
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Mrudula Kanuri, Chandan Egbert
  • Publication number: 20060083246
    Abstract: A system and method are provided for processing packets received via a network. In use, data packets and control packets are received via a network. Further, the data packets are processed in parallel with the control packets.
    Type: Application
    Filed: October 19, 2004
    Publication date: April 20, 2006
    Inventors: John Minami, Robia Uyeshiro, Thien Ooi, Michael Johnson, Mrudula Kanuri
  • Patent number: 7002955
    Abstract: A network switch, configured for performing layer 2 and layer 3 switching in an Ethernet (IEEE 802.3) network without blocking of incoming data packets, includes a network switch port having a packet classifier module configured for evaluating an incoming data packet on an instantaneous basis. The packet classifier module performs simultaneous comparisons between the incoming data stream of the data packet and multiple templates configured for identifying respective data protocols. Each template is composed of a plurality of min terms, wherein each min term specifies a prescribed comparison operation within a selected data byte of the incoming data packet. Hence, the packet classifier module is able to monitor data flows between two network nodes interacting according to a prescribed network application.
    Type: Grant
    Filed: March 6, 2000
    Date of Patent: February 21, 2006
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Mrudula Kanuri, Gopal Krishna
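Illustrative sketch for patent 7002955: the abstract describes templates built from min terms, where each min term is a comparison on a selected byte of the incoming packet and a template matches only when all of its min terms hold. The template definitions and byte offsets below are illustrative assumptions, not the patented encodings.

```python
# Minimal sketch (illustrative encodings, not the patented hardware): each
# template is a set of min terms, each min term is a comparison against a
# selected byte of the incoming packet, and a template matches only when
# every one of its min terms is satisfied. All templates are evaluated
# against the same incoming data.

# A min term: (byte_offset, comparison, value); a template ANDs its min terms.
TEMPLATES = {
    "ipv4_tcp": [(12, "eq", 0x08), (13, "eq", 0x00), (23, "eq", 0x06)],
    "ipv4_udp": [(12, "eq", 0x08), (13, "eq", 0x00), (23, "eq", 0x11)],
}

def min_term(packet, offset, op, value):
    byte = packet[offset]
    return byte == value if op == "eq" else byte != value

def classify(packet):
    """Return every template whose min terms all hold for this packet."""
    return [name for name, terms in TEMPLATES.items()
            if all(min_term(packet, *term) for term in terms)]

if __name__ == "__main__":
    # Ethernet header (EtherType 0x0800) plus an IPv4 header whose protocol
    # field (frame offset 23) indicates TCP.
    pkt = bytearray(64)
    pkt[12], pkt[13] = 0x08, 0x00
    pkt[23] = 0x06
    print(classify(bytes(pkt)))   # ['ipv4_tcp']
```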