Patents by Inventor Mrudula Kanuri
Mrudula Kanuri has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20150123977
Abstract: A method for synchronizing a plurality of pixel processing units is disclosed. The method includes sending a first trigger to a first pixel processing unit to execute a first operation on a portion of a frame of data. The method also includes sending a second trigger to a second pixel processing unit to execute a second operation on the portion of the frame of data when the first operation has completed. The first operation has completed when the first operation reaches a sub-frame boundary.
Type: Application
Filed: November 6, 2013
Publication date: May 7, 2015
Applicant: Nvidia Corporation
Inventors: Mrudula Kanuri, Kamal Jeet
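The trigger chaining this abstract describes can be illustrated with a short software sketch. Everything below is an illustrative assumption rather than material from the patent: the sub-frame size, the operation callables, and the fact that hardware pixel processing units are modeled here as plain Python functions.

```python
# Hypothetical sketch: the second operation on a portion is triggered only
# once the first operation has reached that sub-frame boundary.

SUB_FRAME_ROWS = 4  # assumed sub-frame height, in rows

def process_frame(frame_rows, first_op, second_op):
    out = []
    for start in range(0, len(frame_rows), SUB_FRAME_ROWS):
        portion = frame_rows[start:start + SUB_FRAME_ROWS]
        # first trigger: the first unit runs its operation on the portion
        partial = [first_op(row) for row in portion]
        # sub-frame boundary reached: second trigger fires for this portion
        out.extend(second_op(row) for row in partial)
    return out
```

The point of the per-portion hand-off is that the second unit can start on an early sub-frame while later sub-frames are still being produced, instead of waiting for the whole frame.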
-
Publication number: 20140379846
Abstract: A memory access pipeline within a subsystem is configured to manage memory access requests that are issued by clients of the subsystem. The memory access pipeline is capable of providing a software baseband controller client with sufficient memory bandwidth to initiate and maintain network connections. The memory access pipeline includes a tiered snap arbiter that prioritizes memory access requests. The memory access pipeline also includes a digital differential analyzer that monitors the amount of bandwidth consumed by each client and causes the tiered snap arbiter to buffer memory access requests associated with clients consuming excessive bandwidth. The memory access pipeline also includes a transaction store and latency analyzer configured to buffer pages associated with the baseband controller and to expedite memory access requests issued by the baseband controller when the latency associated with those requests exceeds a pre-set value.
Type: Application
Filed: June 20, 2013
Publication date: December 25, 2014
Applicant: NVIDIA Corporation
Inventors: Mrudula Kanuri, Sreenivas Krishnan
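The per-client bandwidth accounting described here can be sketched as a toy model. The class name, the byte-granular budgets, and the window mechanics are all assumptions for illustration; the patent's digital differential analyzer is a hardware accumulator, not this Python object.

```python
class BandwidthArbiter:
    """Toy model: track per-client consumption and defer (buffer) requests
    from clients that exceed their bandwidth allocation for the window."""

    def __init__(self, budgets):
        self.budgets = dict(budgets)          # client -> bytes allowed per window
        self.used = {c: 0 for c in budgets}   # accumulated consumption

    def request(self, client, nbytes):
        if self.used[client] + nbytes > self.budgets[client]:
            return "deferred"                 # over budget: buffer the request
        self.used[client] += nbytes
        return "granted"

    def new_window(self):
        # carry any overshoot forward, accumulator-style, instead of
        # simply resetting every client to zero
        for c in self.used:
            self.used[c] = max(0, self.used[c] - self.budgets[c])
```

In this model a latency-sensitive client such as the baseband controller simply gets a large budget; the abstract's separate latency-analyzer path is not modeled.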
-
Optimal use of buffer space by a storage controller which writes retrieved data directly to a memory
Patent number: 8683126
Abstract: A storage controller that uses the same buffer to store data elements retrieved from different secondary storage units. In an embodiment, the controller retrieves location descriptors ahead of when data is available for storing in a target memory. Each location descriptor indicates the memory locations at which data received from a secondary storage is to be stored. Only a subset of the location descriptors may be retrieved and stored ahead when processing each request. Because only a limited number of location descriptors are retrieved and stored ahead, the size of the buffer used by the storage controller may be reduced. Because the location descriptors are retrieved ahead of the data, unneeded buffering of the data elements within the storage controller is avoided, reducing the latency of writing the data into main memory and thus improving performance.
Type: Grant
Filed: July 30, 2007
Date of Patent: March 25, 2014
Assignee: Nvidia Corporation
Inventor: Mrudula Kanuri
-
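The descriptor-ahead-of-data idea can be sketched in a few lines. The window size, function name, and the dict standing in for target memory are illustrative assumptions; a real controller would be fetching scatter-gather descriptors over a bus.

```python
from collections import deque

DESCRIPTOR_WINDOW = 4  # assumed: how many location descriptors to prefetch

def stream_to_memory(descriptors, data_chunks, memory):
    """Prefetch a small window of location descriptors ahead of the data so
    each arriving chunk can be written straight to its target address,
    without buffering the data itself inside the controller."""
    pending = deque()
    it = iter(descriptors)
    for chunk in data_chunks:
        # keep a bounded window of descriptors fetched ahead of the data
        while len(pending) < DESCRIPTOR_WINDOW:
            try:
                pending.append(next(it))
            except StopIteration:
                break
        addr = pending.popleft()   # descriptor is ready when the data arrives
        memory[addr] = chunk       # write directly to the target memory
```

Bounding `pending` to a small window is what lets the controller's buffer stay small, which is the trade-off the abstract emphasizes.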
Patent number: 8549170
Abstract: A system and method are provided for performing the retransmission of data in a network. Included is an offload engine in communication with system memory and a network. The offload engine manages the retransmission of data transmitted in the network.
Type: Grant
Filed: December 19, 2003
Date of Patent: October 1, 2013
Assignee: NVIDIA Corporation
Inventors: John Shigeto Minami, Michael Ward Johnson, Andrew Currid, Mrudula Kanuri
-
Patent number: 8489851
Abstract: A memory controller provided according to an aspect of the present invention includes a predictor block that predicts future read requests after converting the memory address in a prior read request received from the processor to an address space consistent with the implementation of a memory unit. According to another aspect of the present invention, the predicted requests are granted access to the memory unit only when no requests are pending from the processors and peripherals that send access requests to the memory unit.
Type: Grant
Filed: December 11, 2008
Date of Patent: July 16, 2013
Assignee: NVIDIA Corporation
Inventors: Balajee Vamanan, Tukaram Methar, Mrudula Kanuri, Sreenivas Krishnan
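The arbitration rule in the second sentence, predicted reads yield to all demand traffic, is simple enough to sketch. The queue representation, the sequential-stride predictor, and the 64-byte stride are assumptions for illustration only.

```python
def arbitrate(demand_queue, predicted_queue):
    """Grant one request per cycle: demand requests from processors and
    peripherals always win; a predicted (prefetch) read is granted only
    when no demand request is pending."""
    if demand_queue:
        return ("demand", demand_queue.pop(0))
    if predicted_queue:
        return ("predicted", predicted_queue.pop(0))
    return None  # idle cycle

def predict_next(last_addr, stride=64):
    # naive sequential guess in the memory unit's address space
    # (assumed stride; the patent does not specify the prediction scheme)
    return last_addr + stride
```

Because predicted requests only use otherwise-idle cycles, a wrong prediction costs bandwidth that nobody was using, which is what makes this kind of prefetching safe.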
-
Patent number: 8370552
Abstract: A scheduler provided according to an aspect of the present invention gives higher priority to data units in a low-priority queue upon occurrence of a starvation condition, and to packets in a high-priority queue otherwise. The scheduler permits retransmission of a data unit in the lower-priority queue when in the starvation condition, but clears the starvation condition once the data unit has been retransmitted a pre-specified number of times. As a result, the data units in the higher-priority queue continue to be processed, avoiding a deadlock in at least certain situations.
Type: Grant
Filed: October 14, 2008
Date of Patent: February 5, 2013
Assignee: Nvidia Corporation
Inventors: Aditya Mittal, Mrudula Kanuri, Venkata Malladi
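The deadlock-avoidance rule, starvation boosts the low-priority queue, but only for a bounded number of retransmissions, can be sketched as a toy scheduler. The retry limit, class shape, and list-based queues are assumptions for illustration; how the starvation condition is detected is left out, as the abstract does not describe it.

```python
MAX_STARVED_RETRIES = 3  # assumed pre-specified retransmission limit

class Scheduler:
    """Toy version of the described policy: serve the high-priority queue
    unless the low-priority queue is starving; while starving, retransmit
    the head low-priority unit, but clear the starvation condition after a
    fixed number of retransmissions so high-priority traffic resumes."""

    def __init__(self):
        self.high, self.low = [], []
        self.starving = False
        self.retries = 0

    def pick(self):
        if self.starving and self.low:
            self.retries += 1
            if self.retries >= MAX_STARVED_RETRIES:
                self.starving = False   # bound the boost: avoid deadlock
                self.retries = 0
            return self.low[0]          # retransmit low-priority head
        if self.high:
            return self.high.pop(0)
        return self.low.pop(0) if self.low else None
```

Without the retry cap, a low-priority unit that never succeeds would hold the scheduler forever; capping it is exactly the deadlock-avoidance the abstract claims.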
-
Patent number: 8065439
Abstract: A system, method, and related data structure are provided for transmitting data in a network. Included is a data object (i.e., metadata) for communicating between a first network protocol layer and a second network protocol layer. In use, the data object facilitates network communication management utilizing a transport offload engine.
Type: Grant
Filed: December 19, 2003
Date of Patent: November 22, 2011
Assignee: NVIDIA Corporation
Inventors: Michael Ward Johnson, Andrew Currid, Mrudula Kanuri, John Shigeto Minami
-
Patent number: 7957379
Abstract: A system and method are provided for processing packets received via a network. In use, data packets and control packets are received via a network. Further, the data packets are processed in parallel with the control packets.
Type: Grant
Filed: October 19, 2004
Date of Patent: June 7, 2011
Assignee: NVIDIA Corporation
Inventors: John Shigeto Minami, Robia Y. Uyeshiro, Thien E. Ooi, Michael Ward Johnson, Mrudula Kanuri
-
Publication number: 20100153661
Abstract: A memory controller provided according to an aspect of the present invention includes a predictor block that predicts future read requests after converting the memory address in a prior read request received from the processor to an address space consistent with the implementation of a memory unit. According to another aspect of the present invention, the predicted requests are granted access to the memory unit only when no requests are pending from the processors and peripherals that send access requests to the memory unit.
Type: Application
Filed: December 11, 2008
Publication date: June 17, 2010
Applicant: NVIDIA Corporation
Inventors: Balajee Vamanan, Tukaram Methar, Mrudula Kanuri, Sreenivas Krishnan
-
Publication number: 20100095036
Abstract: A scheduler provided according to an aspect of the present invention gives higher priority to data units in a low-priority queue upon occurrence of a starvation condition, and to packets in a high-priority queue otherwise. The scheduler permits retransmission of a data unit in the lower-priority queue when in the starvation condition, but clears the starvation condition once the data unit has been retransmitted a pre-specified number of times. As a result, the data units in the higher-priority queue continue to be processed, avoiding a deadlock in at least certain situations.
Type: Application
Filed: October 14, 2008
Publication date: April 15, 2010
Applicant: NVIDIA Corporation
Inventors: Aditya Mittal, Mrudula Kanuri, Venkata Malladi
-
Patent number: 7624198
Abstract: A system and method are provided for communicating data in a network utilizing a transport offload engine. Included is a data list object that describes how data communicated in a network is to be stored (i.e., placed) in memory (i.e., application memory). Stored in association with the data list object (i.e., located or kept together with it) is a sequence object. The sequence object identifies a sequence space associated with the data to be stored using the data list object. To this end, the sequence object is used by the transport offload engine to determine whether or not incoming data is to be stored using the data list object.
Type: Grant
Filed: December 19, 2003
Date of Patent: November 24, 2009
Assignee: NVIDIA Corporation
Inventors: Michael Ward Johnson, Andrew Currid, Mrudula Kanuri, John Shigeto Minami
-
Patent number: 7502366
Abstract: A network switch includes network switch ports, each including a port filter configured for detecting user-selected attributes from a received layer 2 type data frame. Each port filter, upon detecting a user-selected attribute in a received layer 2 type data frame, sends a signal to a switching module indicating the determined presence of the user-selected attribute, enabling the switching module to generate a switching decision based on the corresponding user-selected attribute and based on a corresponding user-defined switching policy. The switching policy may specify a priority class, or a guaranteed quality of service (e.g., a guaranteed bandwidth), ensuring that the received layer 2 type data frame receives the appropriate switching support. The user-selected attributes for the port filter and the user-defined switching policy for the switching module are programmed by a host processor.
Type: Grant
Filed: May 23, 2000
Date of Patent: March 10, 2009
Assignee: Advanced Micro Devices, Inc.
Inventors: Bahadir Erimli, Gopal S. Krishna, Chandan Egbert, Peter Ka-Fai Chow, Mrudula Kanuri, Shr-Jie Tzeng, Somnath Viswanath, Xiaohua Zhuang
-
Optimal Use of Buffer Space by a Storage Controller Which Writes Retrieved Data Directly to a Memory
Publication number: 20090037689
Abstract: A storage controller that uses the same buffer to store data elements retrieved from different secondary storage units. In an embodiment, the controller retrieves location descriptors ahead of when data is available for storing in a target memory. Each location descriptor indicates the memory locations at which data received from a secondary storage is to be stored. Only a subset of the location descriptors may be retrieved and stored ahead when processing each request. Because only a limited number of location descriptors are retrieved and stored ahead, the size of the buffer used by the storage controller may be reduced. Because the location descriptors are retrieved ahead of the data, unneeded buffering of the data elements within the storage controller is avoided, reducing the latency of writing the data into main memory and thus improving performance.
Type: Application
Filed: July 30, 2007
Publication date: February 5, 2009
Applicant: NVIDIA Corporation
Inventor: Mrudula Kanuri
-
Patent number: 7293113
Abstract: A communication processor comprises a data link layer parser circuit (310) and a plurality of network layer parser circuits (322, 326). The data link layer parser circuit (310) receives a data link layer frame and removes a data link layer header therefrom to provide a network layer frame as an output. Each network layer parser circuit corresponds to a different network layer protocol, and is selectively activated to receive the network layer frame and to process a network layer header therefrom to provide a transport layer frame as an output. The data link layer parser circuit (310) further examines a portion of the network layer frame to determine which of the plurality of network layer protocols is used. The data link layer parser circuit (310) activates a corresponding one of the plurality of network layer parser circuits (322, 326) in response, while keeping another one of the plurality of network layer parser circuits (322, 326) inactive.
Type: Grant
Filed: May 28, 2003
Date of Patent: November 6, 2007
Assignee: Advanced Micro Devices, Inc.
Inventors: Gopal Krishna, Mrudula Kanuri
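The dispatch structure described here can be sketched in software. The EtherType values (0x0800 for IPv4, 0x86DD for IPv6) are real, but the fixed header lengths and the dict-of-lambdas standing in for the parser circuits are simplifying assumptions; in the patent, the inactive parsers are hardware circuits that are not clocked, not merely functions that go uncalled.

```python
NETWORK_PARSERS = {
    0x0800: lambda payload: ("ipv4", payload[20:]),  # assumes a 20-byte IPv4 header (no options)
    0x86DD: lambda payload: ("ipv6", payload[40:]),  # fixed 40-byte IPv6 header
}

def parse_frame(frame):
    """Strip the data link (Ethernet) header, peek at the type field to
    decide which network layer protocol is in use, and activate only the
    matching network layer parser."""
    ethertype = int.from_bytes(frame[12:14], "big")
    payload = frame[14:]                       # remove 14-byte Ethernet header
    parser = NETWORK_PARSERS.get(ethertype)    # select exactly one parser
    if parser is None:
        return ("unknown", payload)
    return parser(payload)                     # other parsers stay inactive
```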
-
Patent number: 7260631
Abstract: An Internet small computer system interface (iSCSI) system, method, and associated data structure are provided for receiving data in protocol data units. After a protocol data unit is received, a data list is identified that describes how the data contained in the protocol data unit is to be stored (i.e., placed, saved, etc.) in memory (i.e., application memory). A state of the data list is also stored. The state of the data list is then used in conjunction with storing data from a subsequent protocol data unit.
Type: Grant
Filed: December 19, 2003
Date of Patent: August 21, 2007
Assignee: NVIDIA Corporation
Inventors: Michael Ward Johnson, Andrew Currid, Mrudula Kanuri, John Shigeto Minami
-
Patent number: 7103035
Abstract: A network switch, configured for performing layer 2 and layer 3 switching in an Ethernet (IEEE 802.3) network without blocking of incoming data packets, includes a switching module for performing layer 2 and layer 3 switching operations, and a plurality of network switch ports, each configured for connecting the network switch to a corresponding subnetwork. The switching module includes a plurality of address tables for storing address information (e.g., layer 2 and layer 3 address and switching information), where each table is configured for storing the address information of a corresponding one of the subnetworks. The use of multiple address tables within the switching module substantially reduces the time needed to look up address information, especially since the multiple address tables can be accessed independently and simultaneously by the switching module.
Type: Grant
Filed: January 14, 2000
Date of Patent: September 5, 2006
Assignee: Advanced Micro Devices, Inc.
Inventor: Mrudula Kanuri
-
Patent number: 7099285
Abstract: A multiport switching device includes a configuration table that stores associations between the addresses of subnets directly connected to the switching device and the port number of the multiport switching device that leads to each subnet. A host processor connected to the multiport switching device updates and maintains the configuration table. A remote processor communicates with the switching device through the host processor. To facilitate this communication, the host processor executes a TCP/IP stack and the multiport switch is assigned a unique IP address.
Type: Grant
Filed: June 15, 2001
Date of Patent: August 29, 2006
Assignee: Advanced Micro Devices, Inc.
Inventors: Mrudula Kanuri, Somnath Viswanath, Gopal S. Krishna
-
Patent number: 7079537
Abstract: A network switch, configured for performing layer 2 and layer 3 switching in an Ethernet (IEEE 802.3) network without blocking of incoming data packets, includes a switching module for performing layer 2 and layer 3 (specifically Internet Protocol) switching operations, and a plurality of network switch ports, each configured for connecting the network switch to a corresponding subnetwork. The switching module includes address tables for storing address information (e.g., layer 2 and layer 3 address and switching information). The network switching module is configured for performing prescribed layer 3 switching that enables transfer of data packets between subnetworks, bypassing a router that normally would need to manage Internet Protocol switching between subnetworks of the network. Hence, the network switch performs Internet Protocol switching for intranetwork (i.e., inter-subnetwork) traffic, improving efficiency of the router by enabling the router resources to support more subnetworks.
Type: Grant
Filed: April 25, 2000
Date of Patent: July 18, 2006
Assignee: Advanced Micro Devices, Inc.
Inventors: Mrudula Kanuri, Chandan Egbert
-
Publication number: 20060083246
Abstract: A system and method are provided for processing packets received via a network. In use, data packets and control packets are received via a network. Further, the data packets are processed in parallel with the control packets.
Type: Application
Filed: October 19, 2004
Publication date: April 20, 2006
Inventors: John Minami, Robia Uyeshiro, Thien Ooi, Michael Johnson, Mrudula Kanuri
-
Patent number: 7002955
Abstract: A network switch, configured for performing layer 2 and layer 3 switching in an Ethernet (IEEE 802.3) network without blocking of incoming data packets, includes a network switch port having a packet classifier module configured for evaluating an incoming data packet on an instantaneous basis. The packet classifier module performs simultaneous comparisons between the incoming data stream of the data packet and multiple templates configured for identifying respective data protocols. Each template is composed of a plurality of min terms, wherein each min term specifies a prescribed comparison operation within a selected data byte of the incoming data packet. Hence, the packet classifier module is able to monitor data flows between two network nodes interacting according to a prescribed network application.
Type: Grant
Filed: March 6, 2000
Date of Patent: February 21, 2006
Assignee: Advanced Micro Devices, Inc.
Inventors: Mrudula Kanuri, Gopal Krishna
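The min-term idea, a template as a conjunction of single-byte comparisons, lends itself to a compact sketch. The (offset, mask, value) encoding and the two example templates are invented for demonstration; the hardware evaluates all templates in parallel on the byte stream, whereas this sketch simply scans them.

```python
def matches(packet, template):
    # a template matches when every min term does; each min term is a
    # prescribed comparison on one selected byte of the packet
    return all((packet[off] & mask) == value for off, mask, value in template)

def classify(packet, templates):
    # return the names of all protocol templates the packet satisfies
    return [name for name, t in templates.items() if matches(packet, t)]

# hypothetical example templates (offsets assume the packet starts at the
# IPv4 header, with no link-layer header in front)
TEMPLATES = {
    "ipv4": [(0, 0xF0, 0x40)],                        # version nibble == 4
    "ipv4_tcp": [(0, 0xF0, 0x40), (9, 0xFF, 0x06)],   # IPv4 and protocol == TCP
}
```

Because every min term touches exactly one byte, a hardware implementation can evaluate all min terms of all templates in the same cycle as the byte arrives, which is what lets the classifier work "on an instantaneous basis".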