Patents by Inventor Narender Vangati

Narender Vangati has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10929323
    Abstract: Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus includes multi-core processors, a shared L3 or last-level cache (“LLC”), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests, reducing core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.
    Type: Grant
    Filed: October 14, 2019
    Date of Patent: February 23, 2021
    Assignee: Intel Corporation
    Inventors: Ren Wang, Yipeng Wang, Andrew Herdrich, Jr-Shian Tsai, Tsung-Yuan C. Tai, Niall D. McDonnell, Hugh Wilkinson, Bradley A. Burres, Bruce Richardson, Namakkal N. Venkatesan, Debra Bernstein, Edwin Verplanke, Stephen R. Van Doren, An Yan, Andrew Cunningham, David Sonnier, Gage Eads, James T. Clee, Jamison D. Whitesell, Jerry Pirog, Jonathan Kenny, Joseph R. Hasting, Narender Vangati, Stephen Miller, Te K. Ma, William Burroughs
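
A minimal sketch of the idea in the abstract above, assuming a hypothetical software model: a hardware queue manager (HQM) accepts inter-core transfer requests through a credit-gated ring, so a core can submit only while it holds credits and the device can throttle submission instead of stalling cores or dropping requests. The names (hqm_enqueue, hqm_dequeue, HQM_CREDITS_PER_CORE) and the ring layout are assumptions for illustration, not the patented hardware interface.

```c
/* Minimal software model of a credit-gated hardware queue manager (HQM).
 * All names and the ring layout are illustrative assumptions, not Intel's
 * actual hardware interface.                                                */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define HQM_DEPTH 64            /* entries in the device-managed ring          */
#define HQM_CORES 16
#define HQM_CREDITS_PER_CORE 8  /* per-core submission budget                  */

typedef struct {
    uint32_t src_core, dst_core;
    uint64_t payload;           /* e.g. a pointer to a cache-line-sized buffer */
} hqm_request_t;

typedef struct {
    hqm_request_t ring[HQM_DEPTH];
    uint32_t head, tail;
    uint32_t credits[HQM_CORES]; /* remaining submission credits per core      */
} hqm_t;

/* Enqueue only while the submitting core holds credits, so the device can
 * throttle submission instead of dropping requests or stalling the core.    */
static bool hqm_enqueue(hqm_t *q, uint32_t core, hqm_request_t req) {
    if (q->credits[core] == 0) return false;                 /* throttled     */
    if ((q->tail + 1) % HQM_DEPTH == q->head) return false;  /* ring full     */
    q->ring[q->tail] = req;
    q->tail = (q->tail + 1) % HQM_DEPTH;
    q->credits[core]--;
    return true;
}

/* Dequeue on the consumer side; the credit flows back to the producer core. */
static bool hqm_dequeue(hqm_t *q, hqm_request_t *out) {
    if (q->head == q->tail) return false;                    /* empty         */
    *out = q->ring[q->head];
    q->head = (q->head + 1) % HQM_DEPTH;
    q->credits[out->src_core]++;
    return true;
}

int main(void) {
    hqm_t q = {0};
    for (uint32_t c = 0; c < HQM_CORES; c++) q.credits[c] = HQM_CREDITS_PER_CORE;

    hqm_request_t r = { .src_core = 0, .dst_core = 1, .payload = 4096 };
    if (hqm_enqueue(&q, 0, r) && hqm_dequeue(&q, &r))
        printf("core %" PRIu32 " -> core %" PRIu32 ", payload %" PRIu64 "\n",
               r.src_core, r.dst_core, r.payload);
    return 0;
}
```

Because credits are returned on dequeue, a fast producer naturally backs off to the consumer's drain rate rather than overrunning the device.
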
  • Publication number: 20200042479
    Abstract: Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus includes multi-core processors, a shared L3 or last-level cache (“LLC”), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests, reducing core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.
    Type: Application
    Filed: October 14, 2019
    Publication date: February 6, 2020
    Applicant: Intel Corporation
    Inventors: Ren Wang, Yipeng Wang, Andrew Herdrich, Jr-Shian Tsai, Tsung-Yuan C. Tai, Niall D. McDonnell, Hugh Wilkinson, Bradley A. Burres, Bruce Richardson, Namakkal N. Venkatesan, Debra Bernstein, Edwin Verplanke, Stephen R. Van Doren, An Yan, Andrew Cunningham, David Sonnier, Gage Eads, James T. Clee, Jamison D. Whitesell, Jerry Pirog, Jonathan Kenny, Joseph R. Hasting, Narender Vangati, Stephen Miller, Te K. Ma, William Burroughs
  • Patent number: 10445271
    Abstract: Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus includes multi-core processors, a shared L3 or last-level cache (“LLC”), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests, reducing core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.
    Type: Grant
    Filed: January 4, 2016
    Date of Patent: October 15, 2019
    Assignee: Intel Corporation
    Inventors: Ren Wang, Namakkal N. Venkatesan, Debra Bernstein, Edwin Verplanke, Stephen R. Van Doren, An Yan, Andrew Cunningham, David Sonnier, Gage Eads, James T. Clee, Jamison D. Whitesell, Yipeng Wang, Jerry Pirog, Jonathan Kenny, Joseph R. Hasting, Narender Vangati, Stephen Miller, Te K. Ma, William Burroughs, Andrew J. Herdrich, Jr-Shian Tsai, Tsung-Yuan C. Tai, Niall D. McDonnell, Hugh Wilkinson, Bradley A. Burres, Bruce Richardson
  • Patent number: 10216668
    Abstract: Technologies for a distributed hardware queue manager include a compute device having a processor. The processor includes two or more hardware queue managers as well as two or more processor cores. Each processor core can enqueue or dequeue data from the hardware queue manager. Each hardware queue manager can be configured to contain several queue data structures. In some embodiments, the queues are addressed by the processor cores using virtual queue addresses, which are translated into physical queue addresses for accessing the corresponding hardware queue manager. The virtual queues can be moved from one physical queue in one hardware queue manager to a different physical queue in a different hardware queue manager without changing the virtual address of the virtual queue.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: February 26, 2019
    Assignee: Intel Corporation
    Inventors: Ren Wang, Yipeng Wang, Jr-Shian Tsai, Andrew Herdrich, Tsung-Yuan Tai, Niall McDonnell, Stephen Van Doren, David Sonnier, Debra Bernstein, Hugh Wilkinson, Narender Vangati, Stephen Miller, Gage Eads, Andrew Cunningham, Jonathan Kenny, Bruce Richardson, William Burroughs, Joseph Hasting, An Yan, James Clee, Te Ma, Jerry Pirog, Jamison Whitesell
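
The distributed queue-manager abstract above hinges on cores addressing queues by virtual queue id, with a translation step selecting the hardware queue manager and physical queue; rewriting the mapping moves a queue without changing the address the cores use. The sketch below models that translation table in C under assumed names (vq_translate, vq_migrate) and sizes; it is an illustration, not the patented mechanism.

```c
/* Illustrative virtual-to-physical queue translation. The table layout and
 * function names are assumptions for explanation, not the patented design.  */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_VIRT_QUEUES 256

typedef struct {
    uint8_t  hqm_id;      /* which hardware queue manager holds the queue */
    uint16_t phys_queue;  /* physical queue index inside that manager     */
    bool     valid;
} vq_map_entry_t;

static vq_map_entry_t vq_table[NUM_VIRT_QUEUES];

/* Cores name queues only by virtual id; this lookup resolves the placement. */
static bool vq_translate(uint16_t vq, uint8_t *hqm_id, uint16_t *phys_queue) {
    if (vq >= NUM_VIRT_QUEUES || !vq_table[vq].valid) return false;
    *hqm_id = vq_table[vq].hqm_id;
    *phys_queue = vq_table[vq].phys_queue;
    return true;
}

/* Moving a virtual queue only rewrites the mapping; callers keep using the
 * same virtual id before and after the migration.                           */
static void vq_migrate(uint16_t vq, uint8_t new_hqm, uint16_t new_phys) {
    vq_table[vq] = (vq_map_entry_t){ .hqm_id = new_hqm,
                                     .phys_queue = new_phys,
                                     .valid = true };
}

int main(void) {
    uint8_t hqm;
    uint16_t pq;

    vq_migrate(7, 0, 12);   /* initial placement                           */
    if (vq_translate(7, &hqm, &pq))
        printf("vq 7 -> hqm %u, physical queue %u\n", (unsigned)hqm, (unsigned)pq);

    vq_migrate(7, 1, 3);    /* move the queue to another hardware manager  */
    if (vq_translate(7, &hqm, &pq))
        printf("vq 7 -> hqm %u, physical queue %u\n", (unsigned)hqm, (unsigned)pq);
    return 0;
}
```

Because callers never see the physical placement, the second vq_translate call returns the new location while the virtual id stays 7.
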
  • Publication number: 20170286337
    Abstract: Technologies for a distributed hardware queue manager include a compute device having a processor. The processor includes two or more hardware queue managers as well as two or more processor cores. Each processor core can enqueue or dequeue data from the hardware queue manager. Each hardware queue manager can be configured to contain several queue data structures. In some embodiments, the queues are addressed by the processor cores using virtual queue addresses, which are translated into physical queue addresses for accessing the corresponding hardware queue manager. The virtual queues can be moved from one physical queue in one hardware queue manager to a different physical queue in a different hardware queue manager without changing the virtual address of the virtual queue.
    Type: Application
    Filed: March 31, 2016
    Publication date: October 5, 2017
    Inventors: Ren Wang, Yipeng Wang, Jr-Shian Tsai, Andrew Herdrich, Tsung-Yuan Tai, Niall McDonnell, Stephen Van Doren, David Sonnier, Debra Bernstein, Hugh Wilkinson, Narender Vangati, Stephen Miller, Gage Eads, Andrew Cunningham, Jonathan Kenny, Bruce Richardson, William Burroughs, Joseph Hasting, An Yan, James Clee, Te Ma, Jerry Pirog, Jamison Whitesell
  • Publication number: 20170192921
    Abstract: Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus includes multi-core processors, a shared L3 or last-level cache (“LLC”), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests, reducing core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.
    Type: Application
    Filed: January 4, 2016
    Publication date: July 6, 2017
    Inventors: Ren Wang, Yipeng Wang, Andrew J. Herdrich, Jr-Shian Tsai, Tsung-Yuan C. Tai, Niall D. McDonnell, Hugh Wilkinson, Bradley A. Burres, Bruce Richardson, Namakkal N. Venkatesan, Debra Bernstein, Edwin Verplanke, Stephen R. Van Doren, An Yan, Andrew Cunningham, David Sonnier, Gage Eads, James T. Clee, Jamison D. Whitesell, Jerry Pirog, Jonathan Kenny, Joseph R. Hasting, Narender Vangati, Stephen Miller, Te K. Ma, William Burroughs
  • Patent number: 8868889
    Abstract: Described embodiments provide a packet classifier for a network processor that generates tasks corresponding to each received packet. The packet classifier includes a scheduler to generate threads of contexts corresponding to tasks received by the packet classifier from a plurality of processing modules of the network processor. A multi-thread instruction engine processes instructions corresponding to threads received from the scheduler. The multi-thread instruction engine executes instructions by fetching an instruction of the thread from an instruction memory of the packet classifier and determining whether a breakpoint mode of the network processor is enabled. If the breakpoint mode is enabled and the breakpoint indicator of the fetched instruction is set, the packet classifier enters a breakpoint mode. Otherwise, if the breakpoint indicator of the fetched instruction is not set, the multi-thread instruction engine executes the fetched instruction.
    Type: Grant
    Filed: December 22, 2010
    Date of Patent: October 21, 2014
    Assignee: LSI Corporation
    Inventors: Deepak Mital, Te Khac Ma, Narender Vangati, William Burroughs
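
The classifier abstract above turns on one check in the fetch path: if breakpoint mode is enabled and the fetched instruction carries a breakpoint indicator, the thread halts; otherwise the instruction executes. A minimal sketch of that fetch loop follows, assuming a made-up encoding in which the top bit of an instruction word is the breakpoint indicator; the encoding and function names are illustrative, not the classifier's actual ISA.

```c
/* Illustrative fetch loop with a per-instruction breakpoint indicator. The
 * encoding (top bit) and function names are assumptions, not the actual
 * classifier ISA described in the patent.                                   */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BKPT_BIT (1u << 31)   /* assumed: top bit marks a breakpoint */

static bool breakpoint_mode_enabled = true;

static void execute(uint32_t insn)        { printf("exec %#x\n", insn & ~BKPT_BIT); }
static void enter_breakpoint(uint32_t pc) { printf("breakpoint at pc %u\n", pc); }

/* Run one thread of context over an instruction memory image. */
static void run_thread(const uint32_t *imem, uint32_t len) {
    for (uint32_t pc = 0; pc < len; pc++) {
        uint32_t insn = imem[pc];                       /* fetch              */
        if (breakpoint_mode_enabled && (insn & BKPT_BIT)) {
            enter_breakpoint(pc);                       /* halt this thread   */
            return;
        }
        execute(insn);                                  /* normal execution   */
    }
}

int main(void) {
    uint32_t imem[] = { 0x01, 0x02, 0x03 | BKPT_BIT, 0x04 };
    run_thread(imem, sizeof imem / sizeof imem[0]);
    return 0;
}
```
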
  • Patent number: 8505013
    Abstract: Described embodiments provide address translation for data stored in at least one shared memory of a network processor. A processing module of the network processor generates tasks corresponding to each of a plurality of received packets. A packet classifier generates contexts for each task, each context associated with a thread of instructions to apply to the corresponding packet. A first subset of instructions is stored in a tree memory within the at least one shared memory. A second subset of instructions is stored in a cache within a multi-thread engine of the packet classifier. The multi-thread engine maintains status indicators corresponding to the first and second subsets of instructions within the cache and the tree memory and, based on the status indicators, accesses a lookup table while processing a thread to translate between an instruction number and a physical address of the instruction in the first and second subset of instructions.
    Type: Grant
    Filed: December 22, 2010
    Date of Patent: August 6, 2013
    Assignee: LSI Corporation
    Inventors: Steven Pollock, William Burroughs, Deepak Mital, Te Khac Ma, Narender Vangati, Larry King
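
The address-translation abstract above describes a lookup table consulted while processing a thread to turn an instruction number into a physical address, with status indicators recording whether the instruction currently sits in the engine's cache or in the shared tree memory. The sketch below models that lookup under assumed base addresses and entry layout; names such as insn_phys_addr and IN_TREE_MEM are illustrative only.

```c
/* Illustrative instruction-number to physical-address lookup with a status
 * indicator saying whether the instruction lives in the engine's local cache
 * or in the shared tree memory. Names, sizes, and base addresses are assumed. */
#include <stdint.h>
#include <stdio.h>

enum { IN_TREE_MEM = 0, IN_CACHE = 1 };

typedef struct {
    uint8_t  where;   /* status indicator: cache or shared tree memory */
    uint32_t offset;  /* word offset within that memory                */
} insn_map_t;

#define CACHE_BASE    0x0000u
#define TREE_MEM_BASE 0x8000u

/* Translate a logical instruction number into a physical address. */
static uint32_t insn_phys_addr(const insn_map_t *lut, uint32_t insn_no) {
    const insn_map_t *e = &lut[insn_no];
    uint32_t base = (e->where == IN_CACHE) ? CACHE_BASE : TREE_MEM_BASE;
    return base + e->offset * (uint32_t)sizeof(uint32_t);
}

int main(void) {
    insn_map_t lut[4] = {
        { IN_CACHE,    0 },  /* hot instructions kept in the engine's cache */
        { IN_CACHE,    1 },
        { IN_TREE_MEM, 0 },  /* colder instructions left in shared memory   */
        { IN_TREE_MEM, 1 },
    };
    for (uint32_t i = 0; i < 4; i++)
        printf("instruction %u -> physical address %#x\n", i, insn_phys_addr(lut, i));
    return 0;
}
```
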
  • Publication number: 20110225394
    Abstract: Described embodiments provide a packet classifier for a network processor that generates tasks corresponding to each received packet. The packet classifier includes a scheduler to generate threads of contexts corresponding to tasks received by the packet classifier from a plurality of processing modules of the network processor. A multi-thread instruction engine processes instructions corresponding to threads received from the scheduler. The multi-thread instruction engine executes instructions by fetching an instruction of the thread from an instruction memory of the packet classifier and determining whether a breakpoint mode of the network processor is enabled. If the breakpoint mode is enabled and the breakpoint indicator of the fetched instruction is set, the packet classifier enters a breakpoint mode. Otherwise, if the breakpoint indicator of the fetched instruction is not set, the multi-thread instruction engine executes the fetched instruction.
    Type: Application
    Filed: December 22, 2010
    Publication date: September 15, 2011
    Inventors: Deepak Mital, Te Khac Ma, Narender Vangati, William Burroughs
  • Publication number: 20110225588
    Abstract: Described embodiments provide address translation for data stored in at least one shared memory of a network processor. A processing module of the network processor generates tasks corresponding to each of a plurality of received packets. A packet classifier generates contexts for each task, each context associated with a thread of instructions to apply to the corresponding packet. A first subset of instructions is stored in a tree memory within the at least one shared memory. A second subset of instructions is stored in a cache within a multi-thread engine of the packet classifier. The multi-thread engine maintains status indicators corresponding to the first and second subsets of instructions within the cache and the tree memory and, based on the status indicators, accesses a lookup table while processing a thread to translate between an instruction number and a physical address of the instruction in the first and second subset of instructions.
    Type: Application
    Filed: December 22, 2010
    Publication date: September 15, 2011
    Inventors: Steven Pollock, William Burroughs, Deepak Mital, Te Khac Ma, Narender Vangati, Larry King
  • Publication number: 20070276850
    Abstract: Improved techniques are disclosed for performing an in-service upgrade of software associated with a network or packet processor. By way of example, a method of managing data structures associated with code executable on a packet processor includes the following steps. Data structures in the code are identified as being one of static data structures and non-static data structures, wherein a static data structure includes a data structure that is not changed during execution of the packet processor code and a non-static data structure includes a data structure that is changed during execution of the packet processor code. One or more data structures associated with the packet processor code are managed in a manner specific to the identification of the one or more data structures as static data structures or non-static data structures. At least a portion of the data structures may include tree structures.
    Type: Application
    Filed: April 27, 2006
    Publication date: November 29, 2007
    Inventors: Rajarshi Bhattacharya, David Sonnier, Narender Vangati
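
The upgrade abstract above rests on classifying each data structure as static (never modified while the packet processor runs) or non-static (mutated at run time) and managing the two kinds differently during an in-service upgrade. The C sketch below shows one way such a classification could drive different handlers; the descriptor layout and handler names are assumptions, not the patented method.

```c
/* Illustrative classification of code data structures as static (unchanged at
 * run time) or non-static (mutated at run time), each handled differently
 * during an in-service upgrade. The descriptor and handlers are assumptions.  */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    const char *name;
    bool is_static;   /* true: never modified while the packet processor runs */
} ds_descriptor_t;

static void carry_over_unchanged(const ds_descriptor_t *d) {
    printf("static      %-14s taken as-is from the new code image\n", d->name);
}

static void migrate_live_state(const ds_descriptor_t *d) {
    printf("non-static  %-14s live state copied into the new layout\n", d->name);
}

int main(void) {
    ds_descriptor_t structures[] = {
        { "parse_tree",    true  },  /* e.g. a tree built when the code is compiled */
        { "flow_counters", false },  /* updated per packet, must be preserved       */
        { "route_cache",   false },
    };
    for (unsigned i = 0; i < sizeof structures / sizeof structures[0]; i++) {
        if (structures[i].is_static)
            carry_over_unchanged(&structures[i]);
        else
            migrate_live_state(&structures[i]);
    }
    return 0;
}
```

In a real upgrade the non-static handler would have to preserve live state such as counters while the code image changes underneath it.
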
  • Publication number: 20070255764
    Abstract: Improved techniques are disclosed for performing an in-service upgrade of software associated with a network or packet processor. By way of example, a method of performing an in-service upgrade of code, storable in a memory associated with a packet processor and executable on the packet processor, from a first code version to a second code version, includes the following steps. A first step includes preparing for the upgrade by generating one or more write operations to effectuate the code upgrade from the first code version to the second code version. A second step includes updating the code from the first code version to the second code version by propagating the one or more write operations to the packet processor. A third step includes cleaning up after the updating step by reclaiming one or more memory locations available after the update step. As such, the storage of only a single version of the code in the memory associated with the packet processor is required.
    Type: Application
    Filed: April 27, 2006
    Publication date: November 1, 2007
    Inventors: David Sonnier, Narender Vangati
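
The second upgrade abstract above describes three phases: prepare a set of write operations that turn the old code version into the new one, propagate those writes to the packet processor, then reclaim memory freed by the update, so only a single code image is ever resident. The sketch below walks those phases over a toy code memory; the diff-based prepare step and the zero-word reclamation rule are assumptions for illustration.

```c
/* Illustrative three-phase in-service upgrade: prepare a list of write
 * operations, propagate them to the processor's code memory, then reclaim
 * locations freed by the update. Addresses and structures are assumptions.  */
#include <stdint.h>
#include <stdio.h>

typedef struct { uint32_t addr; uint32_t value; } write_op_t;

#define CODE_WORDS 16
static uint32_t code_mem[CODE_WORDS];   /* stand-in for the processor's memory */

/* Phase 1: diff old and new images into a minimal set of writes. */
static int prepare(const uint32_t *old_img, const uint32_t *new_img,
                   write_op_t *ops) {
    int n = 0;
    for (uint32_t a = 0; a < CODE_WORDS; a++)
        if (old_img[a] != new_img[a])
            ops[n++] = (write_op_t){ .addr = a, .value = new_img[a] };
    return n;
}

/* Phase 2: apply the writes in place, without a second resident image. */
static void update(const write_op_t *ops, int n) {
    for (int i = 0; i < n; i++) code_mem[ops[i].addr] = ops[i].value;
}

/* Phase 3: reclaim words the new image no longer uses (marked 0 here). */
static void cleanup(void) {
    int freed = 0;
    for (uint32_t a = 0; a < CODE_WORDS; a++) if (code_mem[a] == 0) freed++;
    printf("reclaimed %d unused words\n", freed);
}

int main(void) {
    uint32_t v1[CODE_WORDS] = { 1, 2, 3, 4 }, v2[CODE_WORDS] = { 1, 9, 3, 0 };
    write_op_t ops[CODE_WORDS];
    for (uint32_t a = 0; a < CODE_WORDS; a++) code_mem[a] = v1[a];

    int n = prepare(v1, v2, ops);   /* phase 1 */
    update(ops, n);                 /* phase 2 */
    cleanup();                      /* phase 3 */
    printf("applied %d write operations\n", n);
    return 0;
}
```
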
  • Publication number: 20050114657
    Abstract: Techniques are disclosed for generating a representation of an access control list, the representation being utilizable in a network processor or other type of processor to perform packet filtering or other type of access control list based function. A plurality of rules of the access control list are determined, each of at least a subset of the rules having a plurality of fields and a corresponding action, and the rules are processed to generate a multi-level tree representation of the access control list, in which each of one or more of the levels of the tree representation is associated with a corresponding one of the fields. At least one level of the tree representation other than a root level of the tree representation comprises a plurality of nodes, with at least two of the nodes at that level each having a separate matching table associated therewith.
    Type: Application
    Filed: November 26, 2003
    Publication date: May 26, 2005
    Inventors: Vinoj Kumar, Narender Vangati
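
The ACL abstract above builds a multi-level tree in which each non-root level corresponds to one rule field and nodes at that level carry their own matching tables. The sketch below evaluates a two-field ACL that way: the first field selects a level-2 node, whose table is then matched against the second field. The fields (source side, destination port) and rule contents are invented for illustration.

```c
/* Illustrative two-level ACL tree: level 1 keys on the source field, and each
 * level-2 node carries its own matching table keyed on destination port. The
 * layout and field choice are assumptions for explanation only.              */
#include <stdint.h>
#include <stdio.h>

typedef enum { ACT_PERMIT, ACT_DENY } action_t;

typedef struct { uint16_t port; action_t action; } match_entry_t;

typedef struct {              /* level-2 node with its own matching table */
    const match_entry_t *table;
    int n;
    action_t default_action;
} node_t;

static action_t node_lookup(const node_t *node, uint16_t port) {
    for (int i = 0; i < node->n; i++)
        if (node->table[i].port == port) return node->table[i].action;
    return node->default_action;
}

int main(void) {
    /* Two level-2 nodes, reached from the root by matching the source field. */
    static const match_entry_t web_rules[]  = { { 80, ACT_PERMIT }, { 443, ACT_PERMIT } };
    static const match_entry_t mgmt_rules[] = { { 22, ACT_PERMIT } };
    node_t from_lan = { web_rules,  2, ACT_DENY };
    node_t from_wan = { mgmt_rules, 1, ACT_DENY };

    /* Root level: pick the child by the first field (source), then match the
     * second field (destination port) in that child's own table.             */
    int src_is_lan = 1;
    node_t *child = src_is_lan ? &from_lan : &from_wan;
    printf("port 443 -> %s\n", node_lookup(child, 443) == ACT_PERMIT ? "permit" : "deny");
    printf("port 22  -> %s\n", node_lookup(child, 22)  == ACT_PERMIT ? "permit" : "deny");
    return 0;
}
```
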
  • Publication number: 20050114655
    Abstract: Techniques are disclosed for generating a representation of an access control list, the representation being utilizable in a network processor or other type of processor to perform packet filtering or other type of access control list based function. A plurality of rules of the access control list are determined, each of at least a subset of the rules having a plurality of fields and a corresponding action. The rules are processed to generate a multi-level tree representation of the access control list, in which each of one or more of the levels of the tree representation is associated with a corresponding one of the fields. At least one level of the tree representation comprises a plurality of nodes, with two or more of the nodes of that level having a common subtree, and the tree representation including only a single copy of that subtree. The tree representation is characterizable as a directed graph in which each of the two nodes having the common subtree points to the single copy of the common subtree.
    Type: Application
    Filed: November 26, 2003
    Publication date: May 26, 2005
    Inventors: Stephen Miller, Narender Vangati
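
The companion ACL abstract above adds subtree sharing: when two nodes would own identical subtrees, the representation keeps a single copy and both nodes point to it, so the structure is effectively a directed graph rather than a strict tree. The sketch below shows two parent nodes referencing one shared matching table; the names and rules are again illustrative assumptions.

```c
/* Illustrative sharing of a common subtree: two parent nodes point to one copy
 * of the same matching table, making the structure a directed graph rather
 * than a strict tree. Names and fields are assumptions for explanation.       */
#include <stdint.h>
#include <stdio.h>

typedef enum { ACT_PERMIT, ACT_DENY } action_t;
typedef struct { uint16_t port; action_t action; } match_entry_t;

typedef struct {              /* shared subtree: a single stored matching table */
    const match_entry_t *table;
    int n;
} subtree_t;

typedef struct {              /* level-1 node: just a pointer to its subtree    */
    const char *label;
    const subtree_t *child;
} parent_t;

static action_t subtree_lookup(const subtree_t *s, uint16_t port) {
    for (int i = 0; i < s->n; i++)
        if (s->table[i].port == port) return s->table[i].action;
    return ACT_DENY;
}

int main(void) {
    static const match_entry_t rules[] = { { 80, ACT_PERMIT }, { 443, ACT_PERMIT } };
    subtree_t shared = { rules, 2 };          /* the single stored copy          */

    /* Both parents reference the same subtree object instead of duplicating it. */
    parent_t a = { "src 10.0.0.0/8",     &shared };
    parent_t b = { "src 192.168.0.0/16", &shared };

    printf("%s, port 80 -> %s\n", a.label,
           subtree_lookup(a.child, 80) == ACT_PERMIT ? "permit" : "deny");
    printf("%s, port 80 -> %s (single shared copy: %s)\n", b.label,
           subtree_lookup(b.child, 80) == ACT_PERMIT ? "permit" : "deny",
           a.child == b.child ? "yes" : "no");
    return 0;
}
```

The pointer comparison at the end is only there to make the single-copy property visible.
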