Patents by Inventor Jeen-Yuan Miin

Jeen-Yuan Miin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
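Several of the abstracts below describe concrete mechanisms (per-thread instruction caches, parity-protected flow-control FIFOs, neighbor registers between processing elements, a crossbar chassis interconnect, condition-code combining, ready-bit scheduling, and event counting); short illustrative sketches of these ideas follow the listing.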

  • Patent number: 8087024
    Abstract: In general, in one aspect, the disclosure describes a processor that includes an instruction store to store instructions of at least a portion of at least one program and multiple engines coupled to the shared instruction store. The engines provide multiple execution threads and include an instruction cache to cache a subset of the at least the portion of the at least one program from the instruction store, with different respective portions of the engine's instruction cache being allocated to different respective ones of the engine threads.
    Type: Grant
    Filed: November 18, 2008
    Date of Patent: December 27, 2011
    Assignee: Intel Corporation
    Inventors: Sridhar Lakshmanamurthy, Wilson Y. Liao, Prashant R. Chandra, Jeen-Yuan Miin, Yim Pun
  • Patent number: 7536692
    Abstract: In general, in one aspect, the disclosure describes a processor that includes an instruction store to store instructions of at least a portion of at least one program and multiple engines coupled to the shared instruction store. The engines provide multiple execution threads and include an instruction cache to cache a subset of the at least the portion of the at least one program from the instruction store, with different respective portions of the engine's instruction cache being allocated to different respective ones of the engine threads.
    Type: Grant
    Filed: November 6, 2003
    Date of Patent: May 19, 2009
    Assignee: Intel Corporation
    Inventors: Sridhar Lakshmanamurthy, Wilson Y. Liao, Prashant R. Chandra, Jeen-Yuan Miin, Yim Pun
  • Publication number: 20090089546
    Abstract: In general, in one aspect, the disclosure describes a processor that includes an instruction store to store instructions of at least a portion of at least one program and multiple engines coupled to the shared instruction store. The engines provide multiple execution threads and include an instruction cache to cache a subset of the at least the portion of the at least one program from the instruction store, with different respective portions of the engine's instruction cache being allocated to different respective ones of the engine threads.
    Type: Application
    Filed: November 18, 2008
    Publication date: April 2, 2009
    Applicant: Intel Corporation
    Inventors: Sridhar Lakshmanamurthy, Wilson Y. Liao, Prashant R. Chandra, Jeen-Yuan Miin, Yim Pun
  • Patent number: 7337371
    Abstract: Methods, software and systems for handling parity errors in flow control channels are presented. A network processor is provided having a flow control message First In First Out (FIFO) buffer, wherein the FIFO buffer includes a parity field. The network processor is included as either or both of an Ingress network processor and an Egress network processor and is used within a CSIX system or an NPSI NPE system.
    Type: Grant
    Filed: December 30, 2003
    Date of Patent: February 26, 2008
    Assignee: Intel Corporation
    Inventors: Chen-Chi Kuo, Sridhar Lakshmanamurthy, Jeen-Yuan Miin, Raymond Ng
  • Patent number: 7275145
    Abstract: According to some embodiments, a processing element includes (i) a next neighbor register to receive information directly from a previous processing element in a series of processing elements, and (ii) a previous neighbor register to receive information directly from a next processing element in the series.
    Type: Grant
    Filed: December 24, 2003
    Date of Patent: September 25, 2007
    Assignee: Intel Corporation
    Inventors: Sridhar Lakshmanamurthy, Prashant Chandra, Wilson Y. Liao, Jeen-Yuan Miin, Pun Yim, Chen-Chi Kuo, Jaroslaw J. Sydir
  • Publication number: 20060112206
    Abstract: A scalable, high-performance interconnect scheme for a multi-threaded, multi-processing system-on-a-chip network processor unit. An apparatus implementing the technique includes a plurality of masters configured in a plurality of clusters, a plurality of targets, and a chassis interconnect that may be controlled to selectively connect a given master to a given target. In one embodiment, the chassis interconnect comprises a plurality of sets of bus lines connected between the plurality of clusters and the plurality of targets forming a cross-bar interconnect, including sets of bus lines corresponding to a command bus, a pull data bus for target writes, and a push data bus for target reads. Multiplexer circuitry for each of the command bus, pull data bus, and push data bus is employed to selectively connect a given cluster to a given target to enable commands and data to be passed between the given cluster and the given target.
    Type: Application
    Filed: November 23, 2004
    Publication date: May 25, 2006
    Inventors: Sridhar Lakshmanamurthy, Mark Rosenbluth, Matthew Adiletta, Jeen-Yuan Miin, Bijoy Bose
  • Publication number: 20050149691
    Abstract: According to some embodiments, a processing element includes (i) a next neighbor register to receive information directly from a previous processing element in a series of processing elements, and (ii) a previous neighbor register to receive information directly from a next processing element in the series.
    Type: Application
    Filed: December 24, 2003
    Publication date: July 7, 2005
    Inventors: Sridhar Lakshmanamurthy, Prashant Chandra, Wilson Liao, Jeen-Yuan Miin, Pun Yim, Chen-Chi Kuo, Jaroslaw Sydir
  • Publication number: 20050141535
    Abstract: Methods, software and systems for handling parity errors in flow control channels are presented. A network processor is provided having a flow control message First In First Out (FIFO) buffer, wherein the FIFO buffer includes a parity field. The network processor is included as either or both of an Ingress network processor and an Egress network processor and is used within a CSIX system or an NPSI NPE system.
    Type: Application
    Filed: December 30, 2003
    Publication date: June 30, 2005
    Inventors: Chen-Chi Kuo, Sridhar Lakshmanamurthy, Jeen-Yuan Miin, Raymond Ng
  • Publication number: 20050108479
    Abstract: In general, in one aspect, the disclosure describes a processor that includes a memory to store at least a portion of instructions of at least one program and multiple packet engines that include an engine instruction cache to store a subset of the at least one program. The processor also includes circuitry coupled to the packet engines and the memory to receive requests from the multiple engines for subsets of the at least one portion of the at least one set of instructions.
    Type: Application
    Filed: November 6, 2003
    Publication date: May 19, 2005
    Inventors: Sridhar Lakshmanamurthy, Wilson Liao, Prashant Chandra, Jeen-Yuan Miin, Yim Pun
  • Publication number: 20050102474
    Abstract: In general, in one aspect, the disclosure describes a processor that includes an instruction store to store instructions of at least a portion of at least one program and a set of multiple engines coupled to the instruction store. The engines include an engine instruction cache and circuitry to request a subset of the at least the portion of the at least one program.
    Type: Application
    Filed: November 6, 2003
    Publication date: May 12, 2005
    Inventors: Sridhar Lakshmanamurthy, Wilson Liao, Prashant Chandra, Jeen-Yuan Miin, Yim Pun
  • Publication number: 20050102486
    Abstract: In general, in one aspect, the disclosure describes a processor that includes an instruction store to store instructions of at least a portion of at least one program and multiple engines coupled to the shared instruction store. The engines provide multiple execution threads and include an instruction cache to cache a subset of the at least the portion of the at least one program from the instruction store, with different respective portions of the engine's instruction cache being allocated to different respective ones of the engine threads.
    Type: Application
    Filed: November 6, 2003
    Publication date: May 12, 2005
    Inventors: Sridhar Lakshmanamurthy, Wilson Liao, Prashant Chandra, Jeen-Yuan Miin, Yim Pun
  • Publication number: 20050050306
    Abstract: A method of executing instructions on a processor includes, receiving a first condition code produced by executing a first instruction during a first clock cycle on an array of engines included in the processor, receiving a second condition code produced by executing a second instruction during a second clock cycle on the array of engines included in the processor, and executing a logical operator on the first and second condition codes during the second clock cycle on the array of engines included in the processor.
    Type: Application
    Filed: August 26, 2003
    Publication date: March 3, 2005
    Inventors: Sridhar Lakshmanamurthy, Prashant Chandra, Wilson Liao, Jeen-Yuan Miin, Yim Pun, Chen-Chi Kuo, Jaroslaw Sydir, Uday Naik
  • Publication number: 20040252687
    Abstract: A method executed in a computing device for scheduling data packet transfer, the method includes receiving a first and second bit, the first bit indicates if a first digital device is ready to transfer a first data packet, the second bit indicates if a second digital device is ready to transfer a second data packet, receiving a binary number that identifies the first bit, determining the first digital device is ready to transfer the first data packet based on the binary number identifying the first bit, and incrementing the binary number to identify the second bit.
    Type: Application
    Filed: June 16, 2003
    Publication date: December 16, 2004
    Inventors: Sridhar Lakshmanamurthy, Prashant R. Chandra, Wilson Y. Liao, Jeen-Yuan Miin, Yim Pun, Chen-Chi Kuo, Jaroslaw J. Sydir
  • Publication number: 20040006724
    Abstract: Embodiments described herein provide a system and method that advantageously reduces the number of internal signals required to monitor the performance of a network processor. A plurality of events may be selected from a predetermined number of design unit events, and a plurality of signals may be selected from a predetermined number of design unit signals. A plurality of counters may be associated with the plurality of signals, and for each of the plurality of signals, a number of event occurrences may be counted and sent to a processor unit.
    Type: Application
    Filed: July 5, 2002
    Publication date: January 8, 2004
    Applicant: Intel Corporation
    Inventors: Sridhar Lakshmanamurthy, Mark B. Rosenbluth, Jeen-Yuan Miin
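
Illustrative sketches of recurring techniques

Patents 8087024 and 7536692 (and the related publications 20090089546, 20050108479, 20050102474, and 20050102486) describe engines whose local instruction cache is partitioned among hardware threads, each thread refilling only its own region from a shared instruction store. The following is a minimal sketch of that partitioning idea in C; the sizes, structure names, and direct-mapped lookup are assumptions made for illustration, not details taken from the filings.

```c
/* Hypothetical sketch of a per-thread partitioned instruction cache.
 * Each engine's cache is split into equal regions, one per hardware
 * thread, so a thread's fetches can only fill or evict its own region.
 * All sizes and names are illustrative, not taken from the patents. */
#include <stdio.h>
#include <string.h>

#define NUM_THREADS      8
#define LINES_PER_THREAD 16
#define LINE_WORDS       4

struct cache_line {
    unsigned tag;          /* address of the first instruction in the line */
    int      valid;
    unsigned words[LINE_WORDS];
};

struct engine_icache {
    /* one private region per thread */
    struct cache_line region[NUM_THREADS][LINES_PER_THREAD];
};

/* Look up an instruction for a given thread; on a miss, refill the
 * line from the shared instruction store (here just a big array). */
static unsigned fetch(struct engine_icache *ic, const unsigned *store,
                      int thread, unsigned pc)
{
    unsigned line_addr = pc / LINE_WORDS * LINE_WORDS;
    unsigned index = (pc / LINE_WORDS) % LINES_PER_THREAD;
    struct cache_line *line = &ic->region[thread][index];

    if (!line->valid || line->tag != line_addr) {      /* miss */
        memcpy(line->words, &store[line_addr], sizeof(line->words));
        line->tag = line_addr;
        line->valid = 1;
    }
    return line->words[pc % LINE_WORDS];
}

int main(void)
{
    static unsigned store[1024];            /* shared instruction store */
    static struct engine_icache ic;
    for (unsigned i = 0; i < 1024; i++) store[i] = i;

    /* thread 0 and thread 3 fetch the same address but use separate regions */
    printf("%u %u\n", fetch(&ic, store, 0, 100), fetch(&ic, store, 3, 100));
    return 0;
}
```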
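
Patent 7337371 and publication 20050141535 center on a flow-control message FIFO whose entries carry a parity field so a corrupted message can be detected when it is read out. Below is a sketch of that idea only; the depth, field widths, and choice of even parity are all assumptions.

```c
/* Hypothetical sketch of a flow-control message FIFO whose entries carry
 * a parity field, so a corrupted message can be detected when it is
 * popped. Names and field widths are illustrative only. */
#include <stdio.h>

#define FIFO_DEPTH 16

struct fc_msg {
    unsigned payload;
    unsigned parity;       /* even parity over the payload bits */
};

struct fc_fifo {
    struct fc_msg entry[FIFO_DEPTH];
    int head, tail, count;
};

static unsigned even_parity(unsigned v)
{
    unsigned p = 0;
    while (v) { p ^= v & 1u; v >>= 1; }
    return p;
}

static int push(struct fc_fifo *f, unsigned payload)
{
    if (f->count == FIFO_DEPTH) return -1;            /* full */
    f->entry[f->tail].payload = payload;
    f->entry[f->tail].parity  = even_parity(payload);
    f->tail = (f->tail + 1) % FIFO_DEPTH;
    f->count++;
    return 0;
}

/* Returns 0 on success, -1 if empty, -2 on a parity error. */
static int pop(struct fc_fifo *f, unsigned *payload)
{
    if (f->count == 0) return -1;
    struct fc_msg *m = &f->entry[f->head];
    f->head = (f->head + 1) % FIFO_DEPTH;
    f->count--;
    if (even_parity(m->payload) != m->parity) return -2;
    *payload = m->payload;
    return 0;
}

int main(void)
{
    struct fc_fifo f = {0};
    unsigned out;
    push(&f, 0xABCD);
    f.entry[0].payload ^= 1;               /* simulate a single-bit error */
    printf("pop -> %d\n", pop(&f, &out));  /* reports the parity error */
    return 0;
}
```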
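
Patent 7275145 and publication 20050149691 describe processing elements in a series, each with a next neighbor register written by the previous element and a previous neighbor register written by the next element. The toy sketch below shows that wiring; the element count, the "process" step, and the single-pass update are assumptions.

```c
/* Hypothetical sketch of neighbor registers between pipelined processing
 * elements: element i's next_neighbor register is written by element i-1,
 * and its previous_neighbor register is written by element i+1, so data
 * and feedback move between adjacent elements without a shared bus.
 * Structure and names are illustrative only. */
#include <stdio.h>

#define NUM_PE 4

struct processing_element {
    unsigned next_neighbor;      /* filled by the previous element */
    unsigned previous_neighbor;  /* filled by the next element */
};

/* Element i forwards work downstream and sends a value back upstream. */
static void step(struct processing_element pe[NUM_PE])
{
    for (int i = 0; i < NUM_PE; i++) {
        unsigned work = pe[i].next_neighbor + 1;   /* "process" the data */
        if (i + 1 < NUM_PE)
            pe[i + 1].next_neighbor = work;        /* to the next element */
        if (i - 1 >= 0)
            pe[i - 1].previous_neighbor = work;    /* to the previous element */
    }
}

int main(void)
{
    struct processing_element pe[NUM_PE] = {0};
    pe[0].next_neighbor = 10;                      /* seed the pipeline */
    step(pe);
    printf("PE3 received %u, PE0 saw %u from its neighbor\n",
           pe[NUM_PE - 1].next_neighbor, pe[0].previous_neighbor);
    return 0;
}
```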
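
Publication 20060112206 describes a chassis interconnect in which multiplexer circuitry on a command bus, a pull data bus (target writes), and a push data bus (target reads) selectively connects a cluster of masters to a target. The sketch below models only the mux-selection idea; the counts, the connect/drive API, and the absence of any arbitration are assumptions.

```c
/* Hypothetical sketch of the cluster-to-target crossbar idea: per-bus
 * multiplexers select which cluster currently drives each target on the
 * command, pull-data (writes), and push-data (reads) buses. The names,
 * counts, and selection policy are illustrative, not from the filing. */
#include <stdio.h>

#define NUM_CLUSTERS 4
#define NUM_TARGETS  3

enum bus { CMD_BUS, PULL_BUS, PUSH_BUS, NUM_BUSES };

/* select[bus][target] names the cluster whose lines the mux passes through */
struct chassis {
    int select[NUM_BUSES][NUM_TARGETS];
};

/* Program the mux so 'cluster' owns 'target' on the given bus. */
static void connect(struct chassis *c, enum bus b, int cluster, int target)
{
    c->select[b][target] = cluster;
}

/* Deliver a value only if the mux currently routes that cluster to target. */
static int drive(const struct chassis *c, enum bus b, int cluster,
                 int target, unsigned value)
{
    if (c->select[b][target] != cluster) return -1;    /* not selected */
    printf("bus %d: cluster %d -> target %d : 0x%x\n", b, cluster, target, value);
    return 0;
}

int main(void)
{
    struct chassis c = {0};
    connect(&c, CMD_BUS, 2, 1);           /* cluster 2 gets command bus to target 1 */
    drive(&c, CMD_BUS, 2, 1, 0xC0FFEE);   /* succeeds */
    drive(&c, CMD_BUS, 3, 1, 0xBAD);      /* rejected: mux not set for cluster 3 */
    return 0;
}
```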
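
Publication 20050050306 concerns producing a condition code in one clock cycle, a second condition code in the next cycle, and applying a logical operator to both during that second cycle. The following is only a software analogy of that sequencing; the latch, the AND operator, and the zero-test are assumptions.

```c
/* Hypothetical sketch of combining condition codes across consecutive
 * instructions: the code produced in cycle N is latched, and in cycle N+1
 * a logical operator (AND here) merges it with the new code so a single
 * test can cover both outcomes. Entirely illustrative. */
#include <stdio.h>
#include <stdbool.h>

static bool saved_cc;     /* condition code latched from the previous cycle */

/* Execute an instruction that produces a condition code (here: result == 0). */
static bool exec_and_set_cc(int result)
{
    return result == 0;
}

int main(void)
{
    /* cycle 1: first instruction produces and latches its condition code */
    saved_cc = exec_and_set_cc(5 - 5);
    /* cycle 2: second instruction produces its code and the logical
     * operator is applied in the same cycle */
    bool cc2 = exec_and_set_cc(7 - 3);
    bool combined = saved_cc && cc2;          /* AND of the two codes */
    printf("combined condition: %s\n", combined ? "true" : "false");
    return 0;
}
```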
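
Publication 20040252687 describes scheduling packet transfers by scanning ready bits with a binary number that identifies which bit to inspect and is incremented to move on to the next one. The sketch below shows that round-robin scan; the vector width and wrap-around behavior are assumptions.

```c
/* Hypothetical sketch of the ready-bit scheduling idea: each bit of a
 * vector says whether a device has a packet to transfer, a counter
 * (the "binary number") points at the bit to inspect, and the counter is
 * incremented to visit the next device. Names are illustrative. */
#include <stdio.h>

#define NUM_DEVICES 8

/* Returns the device selected for transfer, or -1 if none is ready.
 * 'pointer' is advanced past the serviced device (round robin). */
static int schedule(unsigned ready_bits, unsigned *pointer)
{
    for (int scanned = 0; scanned < NUM_DEVICES; scanned++) {
        unsigned dev = *pointer;
        *pointer = (*pointer + 1) % NUM_DEVICES;    /* increment to next bit */
        if (ready_bits & (1u << dev))
            return (int)dev;                        /* this device is ready */
    }
    return -1;
}

int main(void)
{
    unsigned pointer = 0;
    unsigned ready = 0x14;              /* devices 2 and 4 are ready */
    printf("first pick: %d\n", schedule(ready, &pointer));   /* 2 */
    printf("second pick: %d\n", schedule(ready, &pointer));  /* 4 */
    return 0;
}
```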
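
Publication 20040006724 aims to reduce the number of internal monitoring signals by selecting a few events and signals, attaching counters to them, and reporting the counts to a processor unit. The rough sketch below shows only that select-and-count idea; the counter count, the sampling interface, and all names are invented for illustration.

```c
/* Hypothetical sketch of the event-selection idea: a handful of counters
 * are each mapped to one selected internal signal, incremented when that
 * signal is asserted, and read back by software. Everything here is
 * illustrative, not the filing's actual design. */
#include <stdio.h>

#define NUM_SIGNALS  32     /* internal design-unit signals available */
#define NUM_COUNTERS 4      /* only a few counters, so signals are multiplexed */

struct perf_monitor {
    int      selected_signal[NUM_COUNTERS];  /* which signal each counter watches */
    unsigned count[NUM_COUNTERS];
};

static void select_signal(struct perf_monitor *pm, int counter, int signal)
{
    pm->selected_signal[counter] = signal;
    pm->count[counter] = 0;
}

/* Called once per cycle with the current value of every internal signal. */
static void sample(struct perf_monitor *pm, const int signals[NUM_SIGNALS])
{
    for (int c = 0; c < NUM_COUNTERS; c++)
        if (signals[pm->selected_signal[c]])
            pm->count[c]++;
}

int main(void)
{
    struct perf_monitor pm = {0};
    int signals[NUM_SIGNALS] = {0};

    select_signal(&pm, 0, 7);          /* counter 0 watches signal 7 */
    signals[7] = 1;
    sample(&pm, signals);
    sample(&pm, signals);
    printf("events on signal 7: %u\n", pm.count[0]);   /* prints 2 */
    return 0;
}
```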