Patents by Inventor Steven Pollock
Steven Pollock has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230401109
Abstract: Examples described herein relate to a load balancer that is configured to selectively perform ordering of requests from the one or more cores, allocate the requests into queue elements prior to allocation to one or more receiver cores of the one or more cores to process the requests, and perform two or more operations of: adjust a number of queues associated with a core of the one or more cores by changing a number of consumer queues (CQs) allocated to a single domain, adjust a number of target cores in a group of target cores to be load balanced, and order memory space writes from multiple caching agents (CAs).
Type: Application
Filed: August 24, 2023
Publication date: December 14, 2023
Inventors: Niall D. McDonnell, Ambalavanar Arulambalam, Te Khac Ma, Surekha Peri, Pravin Pathak, James Clee, An Yan, Steven Pollock, Bruce Richardson, Vijaya Bhaskar Kommineni, Abhinandan Gujjar
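The queue mechanics in this abstract can be illustrated with a toy model. All class and method names below are illustrative, not from the patent; this is a minimal sketch assuming one consumer queue per target core and a simple round-robin scheduling policy.

```python
from collections import deque

class LoadBalancer:
    """Toy model of a queue-based load balancer: requests are placed into
    queue elements and handed to consumer queues (CQs) belonging to a
    group of target cores that can be resized at runtime."""

    def __init__(self, num_cores):
        self.cqs = {core: deque() for core in range(num_cores)}  # one CQ per core
        self.targets = list(self.cqs)  # cores currently eligible for balancing
        self._rr = 0                   # round-robin cursor

    def resize_targets(self, cores):
        """Adjust the group of target cores to be load balanced."""
        self.targets = [c for c in cores if c in self.cqs]
        self._rr = 0

    def enqueue(self, request):
        """Allocate the request to the next eligible core's CQ (round robin)."""
        core = self.targets[self._rr % len(self.targets)]
        self._rr += 1
        self.cqs[core].append(request)
        return core
```

For example, a balancer built over four cores but restricted to cores 0 and 1 will alternate new requests between just those two consumer queues.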
-
Patent number: 8677075
Abstract: Described embodiments provide a network processor having a plurality of processing modules coupled to a system cache and a shared memory. A memory manager allocates blocks of the shared memory to a requesting one of the processing modules. The allocated blocks store data corresponding to packets received by the network processor. The memory manager maintains a reference count for each allocated memory block indicating a number of processing modules accessing the block. One of the processing modules reads the data stored in the allocated memory blocks, stores the read data to corresponding entries of the system cache and operates on the data stored in the system cache. Upon completion of operation on the data, the processing module requests to decrement the reference count of each memory block. Based on the reference count, the memory manager invalidates the entries of the system cache and deallocates the memory blocks.
Type: Grant
Filed: January 27, 2012
Date of Patent: March 18, 2014
Assignee: LSI Corporation
Inventors: Deepak Mital, William Burroughs, David Sonnier, Steven Pollock, David Brown, Joseph Hasting
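The reference-counting scheme this abstract describes can be sketched in a few lines. The names and structure below are illustrative only: each allocated block tracks how many processing modules still reference it, and when the count reaches zero the manager invalidates cached copies and frees the block.

```python
class MemoryManager:
    """Minimal sketch of a reference-counted shared-memory allocator
    backed by a system cache."""

    def __init__(self):
        self.refcounts = {}  # block id -> number of modules using it
        self.cache = set()   # block ids with entries in the system cache
        self._next = 0

    def allocate(self, num_readers):
        """Allocate a block to be read by `num_readers` processing modules."""
        block = self._next
        self._next += 1
        self.refcounts[block] = num_readers
        self.cache.add(block)  # readers operate on the cached copy
        return block

    def release(self, block):
        """A module finished with the block: decrement its reference count.
        Returns True when the block was actually deallocated."""
        self.refcounts[block] -= 1
        if self.refcounts[block] == 0:
            self.cache.discard(block)  # invalidate the system-cache entry
            del self.refcounts[block]  # deallocate the block
            return True
        return False
```

A block shared by two modules survives the first release and is invalidated and freed only on the second, mirroring the decrement-then-deallocate flow in the abstract.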
-
Patent number: 8505013
Abstract: Described embodiments provide address translation for data stored in at least one shared memory of a network processor. A processing module of the network processor generates tasks corresponding to each of a plurality of received packets. A packet classifier generates contexts for each task, each context associated with a thread of instructions to apply to the corresponding packet. A first subset of instructions is stored in a tree memory within the at least one shared memory. A second subset of instructions is stored in a cache within a multi-thread engine of the packet classifier. The multi-thread engine maintains status indicators corresponding to the first and second subsets of instructions within the cache and the tree memory and, based on the status indicators, accesses a lookup table while processing a thread to translate between an instruction number and a physical address of the instruction in the first and second subset of instructions.
Type: Grant
Filed: December 22, 2010
Date of Patent: August 6, 2013
Assignee: LSI Corporation
Inventors: Steven Pollock, William Burroughs, Deepak Mital, Te Khac Ma, Narender Vangati, Larry King
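The two-tier instruction store and lookup-table translation in this abstract can be modeled simply. This is a hedged sketch under assumed base addresses; the class, method names, and address layout are all illustrative, not taken from the patent.

```python
class InstructionTranslator:
    """Sketch of translating an instruction number to a physical address:
    a small per-engine cache holds one subset of instructions, a larger
    tree memory holds the other, and a status indicator records which
    copy is valid for each instruction number."""

    def __init__(self, cache_base=0x000, tree_base=0x1000):
        self.cache_base = cache_base
        self.tree_base = tree_base
        self.in_cache = {}  # status indicators: instr number -> cache slot

    def load_to_cache(self, instr, slot):
        """Mark instruction `instr` as resident in cache slot `slot`."""
        self.in_cache[instr] = slot

    def translate(self, instr):
        """Return the physical address of instruction `instr`."""
        if instr in self.in_cache:  # cached copy is valid
            return self.cache_base + self.in_cache[instr]
        return self.tree_base + instr  # fall back to tree memory
```

Translation thus consults the status indicators first, so the same instruction number resolves to a cache address after it is loaded and to a tree-memory address before.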
-
Patent number: 8352669
Abstract: Described embodiments provide for transfer of data between data modules. At least two crossbar switches are employed, where input nodes and output nodes of each crossbar switch are coupled to corresponding data modules. The ith crossbar switch has an Ni-input by Mi-output switch fabric, wherein Ni and Mi are positive integers greater than one. Each crossbar switch includes an input buffer at each input node, a crosspoint buffer at each crosspoint of the switch fabric, and an output buffer at each output node. The input buffer has an arbiter that reads data packets from the input buffer according to a first scheduling algorithm. An arbiter reads data packets from a crosspoint buffer queue according to a second scheduling algorithm. The output node receives segments of data packets provided from one or more corresponding crosspoint buffers.
Type: Grant
Filed: April 27, 2009
Date of Patent: January 8, 2013
Assignee: LSI Corporation
Inventors: Ephrem Wu, Ting Zhou, Steven Pollock
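The buffered-crossbar structure described here is easy to model: an input buffer per input node, a crosspoint buffer per (input, output) pair, and two scheduling stages. The sketch below is illustrative only and assumes simple round-robin arbitration for both scheduling algorithms, which the abstract leaves unspecified.

```python
from collections import deque

class Crossbar:
    """Toy N-input by M-output buffered crossbar with per-crosspoint
    queues and two round-robin scheduling stages."""

    def __init__(self, n_inputs, m_outputs):
        self.n, self.m = n_inputs, m_outputs
        self.inputs = [deque() for _ in range(n_inputs)]
        self.crosspoints = [[deque() for _ in range(m_outputs)]
                            for _ in range(n_inputs)]

    def send(self, i, j, pkt):
        """Enqueue packet `pkt` at input node i, destined for output j."""
        self.inputs[i].append((j, pkt))

    def step(self):
        """One cycle: input arbiters move one packet each into the matching
        crosspoint buffer, then each output arbiter drains one packet from
        its column of crosspoint buffers."""
        for i, q in enumerate(self.inputs):  # first scheduling algorithm
            if q:
                j, pkt = q.popleft()
                self.crosspoints[i][j].append(pkt)
        delivered = []
        for j in range(self.m):              # second scheduling algorithm
            for i in range(self.n):
                if self.crosspoints[i][j]:
                    delivered.append(self.crosspoints[i][j].popleft())
                    break
        return delivered
```

Because each crosspoint has its own queue, two inputs targeting different outputs do not block each other, which is the main appeal of a buffered crossbar over an unbuffered one.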
-
Publication number: 20120131283
Abstract: Described embodiments provide a network processor having a plurality of processing modules coupled to a system cache and a shared memory. A memory manager allocates blocks of the shared memory to a requesting one of the processing modules. The allocated blocks store data corresponding to packets received by the network processor. The memory manager maintains a reference count for each allocated memory block indicating a number of processing modules accessing the block. One of the processing modules reads the data stored in the allocated memory blocks, stores the read data to corresponding entries of the system cache and operates on the data stored in the system cache. Upon completion of operation on the data, the processing module requests to decrement the reference count of each memory block. Based on the reference count, the memory manager invalidates the entries of the system cache and deallocates the memory blocks.
Type: Application
Filed: January 27, 2012
Publication date: May 24, 2012
Inventors: Deepak Mital, William Burroughs, David Sonnier, Steven Pollock, David Brown, Joseph Hasting
-
Publication number: 20110225588
Abstract: Described embodiments provide address translation for data stored in at least one shared memory of a network processor. A processing module of the network processor generates tasks corresponding to each of a plurality of received packets. A packet classifier generates contexts for each task, each context associated with a thread of instructions to apply to the corresponding packet. A first subset of instructions is stored in a tree memory within the at least one shared memory. A second subset of instructions is stored in a cache within a multi-thread engine of the packet classifier. The multi-thread engine maintains status indicators corresponding to the first and second subsets of instructions within the cache and the tree memory and, based on the status indicators, accesses a lookup table while processing a thread to translate between an instruction number and a physical address of the instruction in the first and second subset of instructions.
Type: Application
Filed: December 22, 2010
Publication date: September 15, 2011
Inventors: Steven Pollock, William Burroughs, Deepak Mital, Te Khac Ma, Narender Vangati, Larry King
-
Patent number: 7709744
Abstract: Venting for component mounting pads of surface mount circuit boards allows the escape of gases from the junction between an electrical component and its associated mounting pad during soldering and facilitates a more complete and effective solder joint between the component base and pad. The venting may be accomplished by either one or more through holes in the board through the pads to allow undesirable gases to escape to the underside of the board, or by one or more solder free channels formed in the pad to allow the gases to escape through the periphery of the pad.
Type: Grant
Filed: March 30, 2007
Date of Patent: May 4, 2010
Assignee: Intel Corporation
Inventors: Richard C. Schaefer, Steven Pollock, Charles M. Bailley, Mike Lowe, Andrew J. Balk, John G. Oldendorf
-
Publication number: 20080236871
Abstract: Venting for component mounting pads of surface mount circuit boards allows the escape of gases from the junction between an electrical component and its associated mounting pad during soldering and facilitates a more complete and effective solder joint between the component base and pad. The venting may be accomplished by either one or more through holes in the board through the pads to allow undesirable gases to escape to the underside of the board, or by one or more solder free channels formed in the pad to allow the gases to escape through the periphery of the pad.
Type: Application
Filed: March 30, 2007
Publication date: October 2, 2008
Applicant: INTEL CORPORATION
Inventors: Richard C. Schaefer, Steven Pollock, Charles M. Bailley, Mike Lowe, Andrew J. Balk, John G. Oldendorf
-
Publication number: 20070241800
Abstract: A programmable delay circuit includes a plurality of delay blocks, a plurality of corresponding tri-state drivers and at least one decoder. The delay blocks are connected together so as to form a series chain. Each of the tri-state drivers includes an input connected to an output of a corresponding one of the delay blocks, and a control input adapted to receive one of multiple control signals. The tri-state driver is operative in one of at least a first mode and a second mode as a function of a corresponding one of the control signals. In the first mode, an output signal generated at an output of the tri-state driver is a function of a voltage level at the input of the tri-state driver, and in the second mode the output of the tri-state driver is in a high-impedance state. The output of each of the tri-state drivers is coupled together and forms an output of the programmable delay circuit. The decoder is connected to the plurality of tri-state drivers.
Type: Application
Filed: April 18, 2006
Publication date: October 18, 2007
Inventor: Steven Pollock
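The circuit's behavior, selecting one tap of a series delay chain by enabling exactly one tri-state driver, can be approximated with a small behavioral model. This is a software sketch of the idea only (unit delays, idle level 0, function names invented here), not a description of the actual hardware.

```python
def decode(select, n):
    """One-hot decoder: enable exactly one of n tri-state drivers;
    all others remain in the high-impedance (disabled) state."""
    return [i == select for i in range(n)]

def programmable_delay(signal, select):
    """Behavioral model of the tapped delay chain: tap `select` sits
    after select + 1 series delay blocks, so the sampled waveform is
    shifted by that many unit delays (padded with the idle level 0)."""
    n_blocks = select + 1
    return [0] * n_blocks + list(signal)
```

Because the driver outputs are wired together, only the one enabled driver ever drives the shared output node, which is what makes the one-hot decoding essential in the real circuit.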
-
Publication number: 20070162280
Abstract: In one embodiment, the invention provides a method for building a voice response system. The method comprises developing voice content for the voice response system, the voice content including prompts and information to be played to a user; and integrating the voice content with logic to define a voice user-interface that is capable of interacting with the user in a manner of a conversation in which the voice user-interface receives an utterance from the user and presents a selection of the voice content to the user in response to the utterance.
Type: Application
Filed: November 17, 2006
Publication date: July 12, 2007
Inventors: Ashok Khosla, Steven Pollock