Patents by Inventor Peter Yifey Yan

Peter Yifey Yan is named as an inventor on the filings listed below. The listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10042773
    Abstract: Systems and techniques for advance cache allocation are described. A described technique includes selecting a job from a plurality of jobs; selecting a processor core from a plurality of processor cores to execute the selected job; receiving a message which describes future memory accesses that will be generated by the selected job; generating a memory burst request based on the message; performing the memory burst request to load data from a memory to at least a dedicated portion of a cache, the cache corresponding to the selected processor core; and starting the selected job on the selected processor core. The technique can include performing an action indicated by a send message to write one or more values from another dedicated portion of the cache to the memory.
    Type: Grant
    Filed: July 28, 2015
    Date of Patent: August 7, 2018
    Assignee: FUTUREWEI TECHNOLOGIES, INC.
    Inventors: Sushma Wokhlu, Lee McFearin, Alan Gatherer, Ashish Shrivastava, Peter Yifey Yan
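The dispatch flow in the abstract above (repeated verbatim in the corresponding published application 20170031829 below) can be pictured in software terms. The following Python sketch is an illustrative model under stated assumptions, not the patented implementation: the trivial job-to-core pairing, the `LINE_SIZE` constant, and the names of the dedicated cache regions are all inventions of the example.

```python
"""Minimal model of advance cache allocation (illustrative only).

A dispatcher selects a job and a core, reads the job's receive message
describing the memory it will touch, burst-loads those lines into a
dedicated portion of the core's cache, then starts the job. A send
message later names values to write back from another dedicated portion.
"""
from dataclasses import dataclass, field

LINE_SIZE = 64  # bytes per cache line (assumed for the sketch)

@dataclass
class Job:
    name: str
    receive_addrs: list                                # future reads, per the receive message
    send_addrs: list = field(default_factory=list)     # lines to flush afterward

@dataclass
class Core:
    core_id: int
    load_region: dict = field(default_factory=dict)    # dedicated prefetch portion
    store_region: dict = field(default_factory=dict)   # dedicated write-back portion

def burst_load(memory, core, addrs):
    """Model one memory burst: pull each referenced line into the load region."""
    for addr in addrs:
        line = addr - (addr % LINE_SIZE)
        core.load_region[line] = memory.get(line, 0)

def write_back(memory, core, addrs):
    """Act on a send message: flush named lines from the store region to memory."""
    for addr in addrs:
        line = addr - (addr % LINE_SIZE)
        if line in core.store_region:
            memory[line] = core.store_region[line]

def dispatch(jobs, cores, memory):
    for job, core in zip(jobs, cores):        # stand-in for the job/core selection steps
        burst_load(memory, core, job.receive_addrs)    # warm the cache before the job runs
        print(f"starting {job.name} on core {core.core_id} "
              f"with {len(core.load_region)} prefetched lines")
        # ... the job would run here, filling core.store_region ...
        write_back(memory, core, job.send_addrs)

if __name__ == "__main__":
    memory = {0: 1, 64: 2, 128: 3}
    dispatch([Job("decode", receive_addrs=[0, 64]), Job("filter", receive_addrs=[128])],
             [Core(0), Core(1)], memory)
```

The point of the flow, as the abstract describes it, is that the burst request is issued before the job starts, so the job's first accesses hit the pre-loaded, dedicated cache region instead of missing to memory.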
  • Publication number: 20170031829
    Abstract: Systems and techniques for advance cache allocation are described. A described technique includes selecting a job from a plurality of jobs; selecting a processor core from a plurality of processor cores to execute the selected job; receiving a message which describes future memory accesses that will be generated by the selected job; generating a memory burst request based on the message; performing the memory burst request to load data from a memory to at least a dedicated portion of a cache, the cache corresponding to the selected processor core; and starting the selected job on the selected processor core. The technique can include performing an action indicated by a send message to write one or more values from another dedicated portion of the cache to the memory.
    Type: Application
    Filed: July 28, 2015
    Publication date: February 2, 2017
    Inventors: Sushma Wokhlu, Lee McFearin, Alan Gatherer, Ashish Shrivastava, Peter Yifey Yan
  • Patent number: 7110411
    Abstract: A method and apparatus is provided for scheduling access to a common resource for a plurality of objects queued in a plurality of connection queues. Tokens associated with the connection queues are stored in scheduling queues. Each scheduling queue has a scheduling weight assigned thereto. Each connection queue has a connection weight value assigned thereto. A serving value is used to determine which scheduling queue to select. When a scheduling queue is selected, an object stored in a connection queue having an associated token stored in the selected scheduling queue is provided to the common resource. Tokens are moved among the scheduling queues as a function of the connection weight values, scheduling weights, and serving value. The objects queued in the connection queues may be fixed length cells or variable length packets.
    Type: Grant
    Filed: March 25, 2002
    Date of Patent: September 19, 2006
    Assignee: Erlang Technology, Inc.
    Inventors: Hossein Saidi, Chunhua Hu, Peter Yifey Yan, Paul Seungkyu Min
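Read as software, the token mechanism in the abstract above (also published as application 20030179774 below) behaves like a weighted round-robin over scheduling queues. The Python sketch below is a deliberately simplified, hypothetical reading rather than the claimed algorithm: the serving value walks a visit list derived from the scheduling weights, and a served connection's token is re-queued based only on its connection weight, whereas the claims describe moving tokens as a function of connection weights, scheduling weights, and the serving value together.

```python
"""Simplified token-based scheduler (an illustrative reading, not the claims).

Each connection queue holds objects (cells or packets) and owns one token.
Tokens live in scheduling queues; the serving value picks a scheduling queue,
the head token's connection is served once, and the token is re-queued.
"""
from collections import deque

class Scheduler:
    def __init__(self, num_sched_queues, sched_weights):
        assert len(sched_weights) == num_sched_queues
        self.sched_queues = [deque() for _ in range(num_sched_queues)]
        self.sched_weights = sched_weights          # how often each queue is visited
        self.conn_queues = {}                       # connection id -> deque of objects
        self.conn_weights = {}                      # connection id -> weight value
        self.serving = 0                            # the serving value

    def add_connection(self, cid, weight):
        self.conn_queues[cid] = deque()
        self.conn_weights[cid] = weight
        self.sched_queues[self._token_queue(cid)].append(cid)   # place the token

    def enqueue(self, cid, obj):
        self.conn_queues[cid].append(obj)

    def _token_queue(self, cid):
        # heavier connections map to lower-indexed (more frequently visited) queues
        return max(0, len(self.sched_queues) - self.conn_weights[cid]) % len(self.sched_queues)

    def serve_one(self):
        """Use the serving value to select a scheduling queue and serve one object."""
        visits = [k for k, w in enumerate(self.sched_weights) for _ in range(w)]
        for _ in range(len(visits)):
            k = visits[self.serving % len(visits)]
            self.serving += 1
            q = self.sched_queues[k]
            if q and self.conn_queues[q[0]]:
                cid = q.popleft()                         # take the head token
                obj = self.conn_queues[cid].popleft()     # object goes to the common resource
                self.sched_queues[self._token_queue(cid)].append(cid)   # move the token
                return cid, obj
        return None                                       # nothing eligible this pass

if __name__ == "__main__":
    s = Scheduler(num_sched_queues=3, sched_weights=[3, 2, 1])
    s.add_connection("A", weight=3)      # heavy connection
    s.add_connection("B", weight=1)      # light connection
    for i in range(4):
        s.enqueue("A", f"cellA{i}")
        s.enqueue("B", f"cellB{i}")
    print([s.serve_one() for _ in range(8)])   # A's cells drain ahead of B's, reflecting the weights
```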
  • Patent number: 7106738
    Abstract: Disclosed herein is a system architecture capable of processing fixed length and/or variable length data packets. Under the method of the invention, incoming data packets are queued together according to their corresponding switch processing parameters (SPPs), and then the commonly-queued data packets are processed through a switch fabric as a single unit. In one aspect of the invention, the commonly-queued data packets are processed by the switch fabric as a single train packet. In another aspect of the invention, the commonly-queued data packets are sliced into a set of subtrain packets. A switch fabric then processes the set of subtrain packets in parallel using a plurality of switch planes. Both aspects of the invention can be implemented with a plurality of packet formatters and deformatters linked to a switch fabric in various configurations, including multi-path and hierarchical switching systems and a multichannel switching system.
    Type: Grant
    Filed: April 6, 2001
    Date of Patent: September 12, 2006
    Assignee: Erlang Technology, Inc.
    Inventors: Hossein Saidi, Chunhua Hu, Peter Yifey Yan, Paul Seungkyu Min
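The two aspects of the abstract above (also published as application 20020145974 below) can be illustrated with a small model. The Python sketch below is only one reading of the text, not the patented formatter: the SPP is modeled as a (dest_port, priority) pair, `SUBTRAIN_BYTES` is an assumed slice size, and round-robin striping stands in for however the actual fabric distributes subtrains across its switch planes.

```python
"""Illustrative train-packet formatter/deformatter (a sketch, not the patent).

Packets sharing switch processing parameters (SPPs) are queued together.
Aspect 1 merges a queue into a single train packet; aspect 2 slices the
train into subtrains striped across several switch planes in parallel.
"""
from collections import defaultdict

SUBTRAIN_BYTES = 8      # slice size per switch plane (assumed)

class PacketFormatter:
    def __init__(self):
        self.queues = defaultdict(list)       # SPP -> packets awaiting a train

    def enqueue(self, dest_port, priority, payload: bytes):
        self.queues[(dest_port, priority)].append(payload)

    def build_train(self, spp):
        """Aspect 1: merge commonly-queued packets into one train packet."""
        packets = self.queues.pop(spp, [])
        lengths = [len(p) for p in packets]   # kept so the deformatter can re-split
        return lengths, b"".join(packets)

    def slice_train(self, train, num_planes):
        """Aspect 2: cut the train into subtrains, one stream per switch plane."""
        chunks = [train[i:i + SUBTRAIN_BYTES] for i in range(0, len(train), SUBTRAIN_BYTES)]
        planes = [[] for _ in range(num_planes)]
        for i, chunk in enumerate(chunks):
            planes[i % num_planes].append(chunk)        # stripe round-robin
        return planes

def deformat(lengths, planes):
    """Reassemble the train from the parallel planes and re-split the packets."""
    num_planes, chunks, idx, plane = len(planes), [], [0] * len(planes), 0
    while any(idx[p] < len(planes[p]) for p in range(num_planes)):
        if idx[plane] < len(planes[plane]):
            chunks.append(planes[plane][idx[plane]])
            idx[plane] += 1
        plane = (plane + 1) % num_planes
    train, out, pos = b"".join(chunks), [], 0
    for n in lengths:
        out.append(train[pos:pos + n])
        pos += n
    return out

if __name__ == "__main__":
    f = PacketFormatter()
    f.enqueue(3, 0, b"hello")
    f.enqueue(3, 0, b"variable-length packet")
    f.enqueue(7, 1, b"other flow")            # different SPP, so a different train
    lengths, train = f.build_train((3, 0))
    planes = f.slice_train(train, num_planes=2)
    assert deformat(lengths, planes) == [b"hello", b"variable-length packet"]
```

In this sketch, carrying the per-packet lengths alongside the train is what lets the deformatter restore variable-length packet boundaries after the parallel transfer.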
  • Publication number: 20030179774
    Abstract: A method and apparatus is provided for scheduling access to a common resource for a plurality of objects queued in a plurality of connection queues. Tokens associated with the connection queues are stored in scheduling queues. Each scheduling queue has a scheduling weight assigned thereto. Each connection queue has a connection weight value assigned thereto. A serving value is used to determine which scheduling queue to select. When a scheduling queue is selected, an object stored in a connection queue having an associated token stored in the selected scheduling queue is provided to the common resource. Tokens are moved among the scheduling queues as a function of the connection weight values, scheduling weights, and serving value. The objects queued in the connection queues may be fixed length cells or variable length packets.
    Type: Application
    Filed: March 25, 2002
    Publication date: September 25, 2003
    Applicant: Erlang Technology, Inc.
    Inventors: Hossein Saidi, Chunhua Hu, Peter Yifey Yan, Paul Seungkyu Min
  • Publication number: 20020145974
    Abstract: Disclosed herein is a system architecture capable of processing fixed length and/or variable length data packets. Under the method of the invention, incoming data packets are queued together according to their corresponding switch processing parameters (SPPs), and then the commonly-queued data packets are processed through a switch fabric as a single unit. In one aspect of the invention, the commonly-queued data packets are processed by the switch fabric as a single train packet. In another aspect of the invention, the commonly-queued data packets are sliced into a set of subtrain packets. A switch fabric then processes the set of subtrain packets in parallel using a plurality of switch planes. Both aspects of the invention can be implemented with a plurality of packet formatters and deformatters linked to a switch fabric in various configurations, including multi-path and hierarchical switching systems and a multichannel switching system.
    Type: Application
    Filed: April 6, 2001
    Publication date: October 10, 2002
    Applicant: Erlang Technology, Inc.
    Inventors: Hossein Saidi, Chunhua Hu, Peter Yifey Yan, Paul Seungkyu Min