Patents by Inventor David P. Sonnier
David P. Sonnier has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20190007318
Abstract: Technologies for inflight packet count limiting include a network device. The network device is to receive a packet from a producer application. The packet is configured to be enqueued into a packet queue as a queue element to be consumed by a consumer application. The network device is also to increment, in response to receipt of the packet, an inflight count variable, determine whether a value of the inflight count variable satisfies an inflight count limit, and enqueue, in response to a determination that the value of the inflight count variable satisfies the inflight count limit, the packet.
Type: Application
Filed: June 30, 2017
Publication date: January 3, 2019
Inventors: Niall D. McDonnell, William Burroughs, Nitin N. Garegrat, David P. Sonnier
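The abstract describes admitting a packet into a queue only while an inflight counter stays within a limit. A minimal Python sketch of that scheme (the class name, method names, and check-before-increment ordering are illustrative, not from the patent):

```python
class InflightLimiter:
    """Toy model of inflight packet count limiting between a producer and a consumer."""

    def __init__(self, limit):
        self.limit = limit    # inflight count limit
        self.inflight = 0     # inflight count variable
        self.queue = []       # packet queue shared with the consumer

    def enqueue(self, packet):
        """Producer side: enqueue only if the inflight count would stay within the limit."""
        if self.inflight + 1 > self.limit:
            return False      # limit exceeded: packet is not enqueued
        self.inflight += 1
        self.queue.append(packet)
        return True

    def dequeue(self):
        """Consumer side: consume a queue element and release its inflight slot."""
        packet = self.queue.pop(0)
        self.inflight -= 1
        return packet
```

In hardware the counter update and limit check would be a single atomic operation; the sketch only shows the control flow.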
-
Patent number: 9218290
Abstract: Described embodiments provide for storing data in a local cache of one of a plurality of processing modules of a network processor. A control processing module determines presence of data stored in its local cache while concurrently sending a request to read the data from a shared memory and from one or more local caches corresponding to other of the plurality of processing modules. Each of the plurality of processing modules responds whether the data is located in one or more corresponding local caches. The control processing module determines, based on the responses, presence of the data in the local caches corresponding to the other processing modules. If the data is present in one of the local caches corresponding to one of the other processing modules, the control processing module reads the data from the local cache containing the data and cancels the read request to the shared memory.
Type: Grant
Filed: July 27, 2011
Date of Patent: December 22, 2015
Assignee: Intel Corporation
Inventors: David P. Sonnier, David A. Brown, Charles Edward Peet, Jr.
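The lookup order the abstract describes (local cache first, then peer caches, falling back to shared memory only on a complete miss) can be sketched sequentially in Python. This is an assumption-laden simplification: the patent issues the requests concurrently and cancels the shared-memory read on a peer hit, while the sketch just short-circuits:

```python
def read_with_cache_snoop(address, local_cache, peer_caches, shared_memory):
    """Resolve a read: local cache, then the other modules' caches, then shared memory."""
    if address in local_cache:
        return local_cache[address]       # local hit: no remote read needed
    for cache in peer_caches:             # responses from the other processing modules
        if address in cache:
            return cache[address]         # peer hit: shared-memory read is cancelled
    return shared_memory[address]         # miss everywhere: fall back to shared memory
```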
-
Patent number: 9183145
Abstract: Described embodiments provide a method of coherently storing data in a network processor having a plurality of processing modules and a shared memory. A control processor sends an atomic update request to a configuration controller. The atomic update request corresponds to data stored in the shared memory, the data also stored in a local pipeline cache corresponding to a client processing module. The configuration controller sends the atomic update request to the client processing modules. Each client processing module determines presence of an active access operation of a cache line in the local cache corresponding to the data of the atomic update request. If the active access operation of the cache line is absent, the client processing module writes the cache line from the local cache to shared memory, clears a valid indicator corresponding to the cache line and updates the data corresponding to the atomic update request.
Type: Grant
Filed: July 27, 2011
Date of Patent: November 10, 2015
Assignee: Intel Corporation
Inventors: David P. Sonnier, David A. Brown, Charles Edward Peet, Jr.
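The client-side sequence in the abstract (check for an active access, write back, clear the valid indicator, apply the update) can be sketched as follows. Names and data structures are illustrative; the patent's coherence protocol involves a configuration controller fanning the request out to every client module:

```python
def apply_atomic_update(address, new_value, local_cache, shared_memory, active_ops):
    """One client module handling an atomic update request for a cache line."""
    if address in active_ops:
        return False                      # an access to the line is in flight: defer
    if address in local_cache:
        shared_memory[address] = local_cache[address]  # write the line back
        del local_cache[address]                       # clear the valid indicator
    shared_memory[address] = new_value                 # apply the atomic update
    return True
```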
-
Patent number: 9154421
Abstract: A network-based apparatus for imposing a minimum transmit latency on data packets of a prescribed data type on a network includes at least one processor. The processor is operative: (i) to receive a data packet of the prescribed data type; (ii) to determine an elapsed time since an arrival of the received data packet at the apparatus; (iii) when the elapsed time is equal to or greater than the minimum transmit latency, to transmit the data packet; and (iv) when the elapsed time is less than the minimum transmit latency, to wait an amount of time at least equal to a difference between the elapsed time and the minimum transmit latency and then to transmit the data packet. The apparatus further includes memory coupled to the processor, the memory being configurable for storing data utilized by the processor.
Type: Grant
Filed: May 30, 2006
Date of Patent: October 6, 2015
Assignee: Intel Corporation
Inventor: David P. Sonnier
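Steps (ii) through (iv) of the abstract reduce to a simple hold-until-deadline rule. A minimal sketch, assuming a software timer stands in for the apparatus hardware (function and parameter names are illustrative):

```python
import time

def transmit_with_min_latency(packet, arrival_time, min_latency, send):
    """Transmit `packet` no sooner than `min_latency` seconds after its arrival."""
    elapsed = time.monotonic() - arrival_time
    if elapsed < min_latency:
        time.sleep(min_latency - elapsed)  # wait out the remaining latency budget
    send(packet)                           # then transmit
```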
-
Patent number: 9081742
Abstract: Described embodiments provide a system having a plurality of processor cores and common memory in direct communication with the cores. A source processing core communicates with a task destination core by generating a task message for the task destination core. The task source core transmits the task message directly to a receiving processing core adjacent to the task source core. If the receiving processing core is not the task destination core, the receiving processing core passes the task message unchanged to a processing core adjacent the receiving processing core. If the receiving processing core is the task destination core, the task destination core processes the message.
Type: Grant
Filed: May 18, 2010
Date of Patent: July 14, 2015
Assignee: Intel Corporation
Inventors: David P. Sonnier, William G. Burroughs, Narender R. Vangati, Deepak Mital, Robert J. Munoz
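The hop-by-hop forwarding described above can be sketched as a walk around a ring of cores. This is a behavioral model only, assuming a unidirectional ring with sequentially numbered cores (the patent does not specify numbering or direction):

```python
def deliver_on_ring(task, source, destination, ring_size, process):
    """Forward a task message core-to-core around a ring until it reaches its destination."""
    core = (source + 1) % ring_size      # source transmits to its adjacent core
    hops = 0
    while core != destination:
        core = (core + 1) % ring_size    # not the destination: pass the message unchanged
        hops += 1
    process(task, core)                  # destination core processes the message
    return hops
```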
-
Patent number: 8949578
Abstract: Described embodiments provide a system having at least two network processors that each have a plurality of processing modules. The processing modules process a packet in a task pipeline by transmitting task messages to other processing modules on a task ring, the task messages related to desired processing of the packet. A series of tasks within a network processor may result in no processing or reduced processing for certain processing modules creating a virtual pipeline depending on the packet received by the network processor. At least two of the network processors communicate tasks. This communication allows for the extension of the virtual pipeline of one network processor to at least two network processors.
Type: Grant
Filed: August 7, 2012
Date of Patent: February 3, 2015
Assignee: LSI Corporation
Inventors: Joseph A. Manzella, Nilesh S. Vora, Walter A. Roper, Robert J. Munoz, David P. Sonnier
-
Patent number: 8917738
Abstract: Described embodiments provide a method of processing packets of a network processor. One or more tasks are generated corresponding to received packets associated with one or more data flows. A traffic manager receives a task corresponding to a data flow, the task provided by a processing module of the network processor. The traffic manager determines whether the received task corresponds to a unicast data flow or a multicast data flow. If the received task corresponds to a multicast data flow, the traffic manager determines, based on identifiers corresponding to the task, an address of launch data stored in launch data tables in a shared memory, and reads the launch data. Based on the identifiers and the read launch data, two or more output tasks are generated corresponding to the multicast data flow, and the two or more output tasks are added at the tail end of a scheduling queue.
Type: Grant
Filed: September 14, 2011
Date of Patent: December 23, 2014
Assignee: LSI Corporation
Inventors: Balakrishnan Sundararaman, Shailendra Aulakh, David P. Sonnier, Rachel Flood
-
Patent number: 8738977
Abstract: In a system including a processor and memory coupled to the processor, a method of device failure analysis includes the steps of: upon each error detected within a test series performed on a device, the processor storing within a table in the memory an address at which the error occurred in the device and storing a bit position of each failed bit corresponding to that address; for each unique address at which at least one error occurred, determining how many different bit positions corresponding to the address failed during the test series; and based on results of the test series, determining whether the device failed the test series.
Type: Grant
Filed: August 31, 2006
Date of Patent: May 27, 2014
Assignee: Agere Systems LLC
Inventors: David A. Brown, James Thomas Kirk, David P. Sonnier, Chris R. Stone
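The table described in the abstract maps each failing address to the distinct bit positions that failed there. A minimal sketch of building that table and counting distinct failed bits per unique address (the pass/fail decision criterion is not specified in the abstract, so it is omitted):

```python
from collections import defaultdict

def record_errors(errors):
    """Given (address, bit_position) pairs from a test series, return a map
    from each unique failing address to its count of distinct failed bit positions."""
    table = defaultdict(set)
    for address, bit in errors:
        table[address].add(bit)            # duplicate bit failures collapse in the set
    return {addr: len(bits) for addr, bits in table.items()}
```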
-
Patent number: 8718040
Abstract: An integrated circuit device for use in a line card of a network node of a digital networking system is provided. The integrated circuit device is capable of intercepting one or more control messages from at least one CPE device. The one or more control messages correspond to at least an operational status of at least one TE device associated with the at least one CPE device. The integrated circuit device is also capable of transmitting one or more rate control messages to a network processor of the network node to adapt bandwidth utilization and provide adapted data traffic flow to at least one CPE device in relation to the operational status of the at least one TE device.
Type: Grant
Filed: December 29, 2004
Date of Patent: May 6, 2014
Assignee: Agere Systems LLC
Inventors: Deepak Kataria, Seong-Hwan Kim, David P. Sonnier
-
Patent number: 8688853
Abstract: A multicast group list (i.e., destination node address list) for a network device is circularly linked such that the list can be entered at any point and traversed back to the entry point. The list traversal is then terminated as the entire list has been processed. The data packet received at the network device for transmission to the multicast group can indicate the entry point, although there are other techniques for determining the entry point. The destination node address for the entry point is skipped, that is the multicast data packet is not transmitted to the entry point destination address.
Type: Grant
Filed: December 21, 2001
Date of Patent: April 1, 2014
Assignee: Agere Systems LLC
Inventors: David E. Clune, Hanan Z. Moller, David P. Sonnier
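The traversal described above can be sketched with a successor map standing in for the circularly linked list: enter anywhere, walk until the entry point comes back around, and skip the entry point itself. The representation is an assumption; the patent concerns linked hardware structures, not Python dicts:

```python
def multicast_destinations(ring, entry):
    """Traverse a circularly linked destination list from `entry`, returning
    every destination except the entry point itself. `ring` maps node -> successor."""
    dests = []
    node = ring[entry]        # start just past the entry point (entry is skipped)
    while node != entry:      # returning to the entry point ends the traversal
        dests.append(node)
        node = ring[node]
    return dests
```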
-
Patent number: 8407707
Abstract: Described embodiments provide a method of assigning tasks to queues of a processing core. Tasks are assigned to a queue by sending, by a source processing core, a new task having a task identifier. A destination processing core receives the new task and determines whether another task having the same identifier exists in any of the queues corresponding to the destination processing core. If another task with the same identifier as the new task exists, the destination processing core assigns the new task to the queue containing a task with the same identifier as the new task. If no task with the same identifier as the new task exists in the queues, the destination processing core assigns the new task to the queue having the fewest tasks. The source processing core writes the new task to the assigned queue. The destination processing core executes the tasks in its queues.
Type: Grant
Filed: May 18, 2010
Date of Patent: March 26, 2013
Assignee: LSI Corporation
Inventors: David P. Sonnier, Balakrishnan Sundararaman, Shailendra Aulakh, Deepak Mital
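The assignment rule in the abstract is essentially "affinity first, load balance second". A minimal sketch, modeling each queue as a list of task identifiers (tie-breaking between equally short queues is unspecified in the abstract; this sketch takes the first):

```python
def assign_task(task_id, queues):
    """Assign a task to the queue already holding a task with the same
    identifier, or to the queue with the fewest tasks if none matches."""
    for q in queues:
        if task_id in q:
            q.append(task_id)            # keep same-identifier tasks together
            return q
    shortest = min(queues, key=len)      # no match: queue having the fewest tasks
    shortest.append(task_id)
    return shortest
```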
-
Publication number: 20120300772
Abstract: Described embodiments provide a system having at least two network processors that each have a plurality of processing modules. The processing modules process a packet in a task pipeline by transmitting task messages to other processing modules on a task ring, the task messages related to desired processing of the packet. A series of tasks within a network processor may result in no processing or reduced processing for certain processing modules creating a virtual pipeline depending on the packet received by the network processor. At least two of the network processors communicate tasks. This communication allows for the extension of the virtual pipeline of one network processor to at least two network processors.
Type: Application
Filed: August 7, 2012
Publication date: November 29, 2012
Inventors: Joseph A. Manzella, Nilesh S. Vora, Walter A. Roper, Robert J. Munoz, David P. Sonnier
-
Patent number: 8255644
Abstract: Described embodiments provide a memory system including a plurality of addressable memory arrays. Data in the arrays is accessed by receiving a logical address of data in the addressable memory array and computing a hash value based on at least a part of the logical address. One of the addressable memory arrays is selected based on the hash value. Data in the selected addressable memory array is accessed using a physical address based on at least part of the logical address not used to compute the hash value. The hash value is generated by a hash function to provide essentially random selection of each of the addressable memory arrays.
Type: Grant
Filed: May 18, 2010
Date of Patent: August 28, 2012
Assignee: LSI Corporation
Inventors: David P. Sonnier, Michael R. Betker
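The address split described above can be sketched as follows: the upper bits of the logical address feed a hash that picks the array, and the remaining lower bits form the physical address within it. The bit split and the use of Python's built-in `hash` as a stand-in for the hardware hash function are both assumptions for illustration:

```python
def locate(logical_address, num_arrays, bank_bits=8):
    """Map a logical address to (array_index, physical_address): the hashed
    upper bits select the array; the unhashed lower bits address within it."""
    hash_part = logical_address >> bank_bits              # bits fed to the hash
    physical = logical_address & ((1 << bank_bits) - 1)   # bits not used by the hash
    array_index = hash(hash_part) % num_arrays            # essentially random selection
    return array_index, physical
```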
-
Patent number: 8214868
Abstract: Apparatus for distributing streaming multimedia to at least one end client over a network includes memory and at least one processor operatively connected to the memory. The processor is operative: (i) to receive the streaming multimedia from at least one multimedia source via at least one of a plurality of channels in the network; (ii) when a channel change request generated by the end client for changing a channel and corresponding multimedia content from the multimedia source is not detected, to deliver the at least one multimedia stream to the end client at a first data rate; and (iii) when the channel change request has been detected, to deliver the at least one multimedia stream to the end client at a second data rate for a prescribed period of time after receiving the channel change request and, after the prescribed period of time, to deliver the at least one multimedia stream to the end client at the first data rate, wherein the second data rate is greater than the first data rate.
Type: Grant
Filed: April 21, 2006
Date of Patent: July 3, 2012
Assignee: Agere Systems Inc.
Inventors: Christopher W. Hamilton, David P. Sonnier, Milan Zoranovic
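The rate policy in the abstract is a temporary boost after a channel change. A minimal sketch of the selection logic, assuming scalar rates and wall-clock times (both illustrative):

```python
def current_rate(now, change_time, boost_period, base_rate, boost_rate):
    """Return the delivery rate: the higher second rate for `boost_period`
    seconds after a channel change, the first rate otherwise."""
    if change_time is not None and now - change_time < boost_period:
        return boost_rate    # fill the new channel's buffer quickly
    return base_rate         # steady state: first data rate
```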
-
Publication number: 20120002546
Abstract: Described embodiments provide a method of processing packets of a network processor. One or more tasks are generated corresponding to received packets associated with one or more data flows. A traffic manager receives a task corresponding to a data flow, the task provided by a processing module of the network processor. The traffic manager determines whether the received task corresponds to a unicast data flow or a multicast data flow. If the received task corresponds to a multicast data flow, the traffic manager determines, based on identifiers corresponding to the task, an address of launch data stored in launch data tables in a shared memory, and reads the launch data. Based on the identifiers and the read launch data, two or more output tasks are generated corresponding to the multicast data flow, and the two or more output tasks are added at the tail end of a scheduling queue.
Type: Application
Filed: September 14, 2011
Publication date: January 5, 2012
Inventors: Balakrishnan Sundararaman, Shailendra Aulakh, David P. Sonnier, Rachel Flood
-
Publication number: 20110289180
Abstract: Described embodiments provide for storing data in a local cache of one of a plurality of processing modules of a network processor. A control processing module determines presence of data stored in its local cache while concurrently sending a request to read the data from a shared memory and from one or more local caches corresponding to other of the plurality of processing modules. Each of the plurality of processing modules responds whether the data is located in one or more corresponding local caches. The control processing module determines, based on the responses, presence of the data in the local caches corresponding to the other processing modules. If the data is present in one of the local caches corresponding to one of the other processing modules, the control processing module reads the data from the local cache containing the data and cancels the read request to the shared memory.
Type: Application
Filed: July 27, 2011
Publication date: November 24, 2011
Inventors: David P. Sonnier, David A. Brown, Charles Edward Peet, Jr.
-
Publication number: 20110289279
Abstract: Described embodiments provide a method of coherently storing data in a network processor having a plurality of processing modules and a shared memory. A control processor sends an atomic update request to a configuration controller. The atomic update request corresponds to data stored in the shared memory, the data also stored in a local pipeline cache corresponding to a client processing module. The configuration controller sends the atomic update request to the client processing modules. Each client processing module determines presence of an active access operation of a cache line in the local cache corresponding to the data of the atomic update request. If the active access operation of the cache line is absent, the client processing module writes the cache line from the local cache to shared memory, clears a valid indicator corresponding to the cache line and updates the data corresponding to the atomic update request.
Type: Application
Filed: July 27, 2011
Publication date: November 24, 2011
Inventors: David P. Sonnier, David A. Brown, Charles Edward Peet, Jr.
-
Patent number: 7930691
Abstract: Improved techniques are disclosed for performing an in-service upgrade of software associated with a network or packet processor. By way of example, a method of managing data structures associated with code executable on a packet processor includes the following steps. Data structures in the code are identified as being one of static data structures and non-static data structures, wherein a static data structure includes a data structure that is not changed during execution of the packet processor code and a non-static data structure includes a data structure that is changed during execution of the packet processor code. One or more data structures associated with the packet processor code are managed in a manner specific to the identification of the one or more data structures as static data structures or non-static data structures. At least a portion of the data structures may include tree structures.
Type: Grant
Filed: April 27, 2006
Date of Patent: April 19, 2011
Assignee: Agere Systems Inc.
Inventors: Rajarshi Bhattacharya, David P. Sonnier, Narender Reddy Vangati
-
Patent number: 7912069
Abstract: A virtual segmentation system and a method of operating the same. In one embodiment, the virtual segmentation system includes a protocol data unit receiver subsystem configured to (i) receive at least a portion of a protocol data unit and (ii) store the at least a portion of the protocol data unit in at least one block, and a virtual segmentation subsystem, associated with the protocol data unit receiver subsystem, configured to perform virtual segmentation on the protocol data unit by segmenting the at least one block when retrieved without reassembling an entirety of the protocol data unit.
Type: Grant
Filed: December 12, 2005
Date of Patent: March 22, 2011
Assignee: Agere Systems Inc.
Inventors: David B. Kramer, David P. Sonnier
-
Publication number: 20100293345
Abstract: Described embodiments provide a memory system including a plurality of addressable memory arrays. Data in the arrays is accessed by receiving a logical address of data in the addressable memory array and computing a hash value based on at least a part of the logical address. One of the addressable memory arrays is selected based on the hash value. Data in the selected addressable memory array is accessed using a physical address based on at least part of the logical address not used to compute the hash value. The hash value is generated by a hash function to provide essentially random selection of each of the addressable memory arrays.
Type: Application
Filed: May 18, 2010
Publication date: November 18, 2010
Inventors: David P. Sonnier, Michael R. Betker