Patents by Inventor Heeloo Chung

Heeloo Chung has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11687144
    Abstract: A new approach contemplates systems and methods to support control of power consumption of a memory on a chip by throttling port access requests to the memory via a memory arbiter based on one or more programmable parameters. The memory arbiter is configured to restrict the number of ports being used to access the memory at the same time to be less than the available ports of the memory, thereby enabling adaptive power control of the chip. Two port throttling schemes are enabled: strict port throttling, which throttles the number of ports granted for memory access to no more than a user-configured maximum throttle port number, and leaky bucket port throttling, which throttles the number of ports granted for memory access to within a range based on the number of credit tokens maintained in a credit register.
    Type: Grant
    Filed: February 11, 2022
    Date of Patent: June 27, 2023
    Assignee: Marvell Asia Pte Ltd
    Inventors: Heeloo Chung, Sowmya Hotha, Saurabh Shrivastava, Chia-Hsin Chen
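
A short, hypothetical sketch of the two throttling schemes described in the abstract above. The class names, parameters, and refill policy are assumptions made for illustration, not details taken from the patent: strict throttling caps concurrent grants at a configured maximum, while leaky-bucket throttling grants ports only while credit tokens remain in a credit register that is refilled periodically.

# Illustrative sketch of the two port-throttling schemes described above.
# All names and parameters are hypothetical; they are not taken from the patent.

class StrictPortThrottler:
    """Grant no more than max_throttle_ports concurrent memory ports."""
    def __init__(self, max_throttle_ports):
        self.max_throttle_ports = max_throttle_ports

    def grant(self, requested_ports):
        # Grant at most the user-configured maximum number of ports.
        return min(requested_ports, self.max_throttle_ports)


class LeakyBucketPortThrottler:
    """Grant ports only while credit tokens remain; tokens refill each interval."""
    def __init__(self, bucket_depth, refill_tokens):
        self.bucket_depth = bucket_depth      # maximum credits the register can hold
        self.refill_tokens = refill_tokens    # credits added per refill interval
        self.credits = bucket_depth

    def refill(self):
        # Called once per refill interval (e.g., once per clock window).
        self.credits = min(self.bucket_depth, self.credits + self.refill_tokens)

    def grant(self, requested_ports):
        granted = min(requested_ports, self.credits)
        self.credits -= granted               # each granted port consumes one credit
        return granted


if __name__ == "__main__":
    strict = StrictPortThrottler(max_throttle_ports=4)
    bucket = LeakyBucketPortThrottler(bucket_depth=8, refill_tokens=2)
    print(strict.grant(6))   # -> 4
    print(bucket.grant(6))   # -> 6 (credits drop to 2)
    bucket.refill()
    print(bucket.grant(6))   # -> 4 (only 4 credits available)
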
  • Publication number: 20220164018
    Abstract: A new approach contemplates systems and methods to support control of power consumption of a memory on a chip by throttling port access requests to the memory via a memory arbiter based on one or more programmable parameters. The memory arbiter is configured to restrict the number of ports being used to access the memory at the same time to be less than the available ports of the memory, thereby enabling adaptive power control of the chip. Two port throttling schemes are enabled: strict port throttling, which throttles the number of ports granted for memory access to no more than a user-configured maximum throttle port number, and leaky bucket port throttling, which throttles the number of ports granted for memory access to within a range based on the number of credit tokens maintained in a credit register.
    Type: Application
    Filed: February 11, 2022
    Publication date: May 26, 2022
    Inventors: Heeloo Chung, Sowmya Hotha, Saurabh Shrivastava, Chia-Hsin Chen
  • Patent number: 11287869
    Abstract: A new approach contemplates systems and methods to support control of power consumption of a memory on a chip by throttling port access requests to the memory via a memory arbiter based on one or more programmable parameters. The memory arbiter is configured to restrict the number of ports being used to access the memory at the same time to be less than the available ports of the memory, thereby enabling adaptive power control of the chip. Two port throttling schemes are enabled: strict port throttling, which throttles the number of ports granted for memory access to no more than a user-configured maximum throttle port number, and leaky bucket port throttling, which throttles the number of ports granted for memory access to within a range based on the number of credit tokens maintained in a credit register.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: March 29, 2022
    Assignee: Marvell Asia Pte Ltd
    Inventors: Heeloo Chung, Sowmya Hotha, Saurabh Shrivastava, Chia-Hsin Chen
  • Publication number: 20210341988
    Abstract: A new approach contemplates systems and methods to support control of power consumption of a memory on a chip by throttling port access requests to the memory via a memory arbiter based on one or more programmable parameters. The memory arbiter is configured to restrict the number of ports being used to access the memory at the same time to be less than the available ports of the memory, thereby enabling adaptive power control of the chip. Two port throttling schemes are enabled: strict port throttling, which throttles the number of ports granted for memory access to no more than a user-configured maximum throttle port number, and leaky bucket port throttling, which throttles the number of ports granted for memory access to within a range based on the number of credit tokens maintained in a credit register.
    Type: Application
    Filed: April 30, 2020
    Publication date: November 4, 2021
    Inventors: Heeloo Chung, Sowmya Hotha, Saurabh Shrivastava, Chia-Hsin Chen
  • Patent number: 10291540
    Abstract: A computer-implemented medium uses a scheduler for processing requests by receiving packet data from multiple source ports and classifying the received packet data based upon the source port on which it was received and the destination port to which it is being sent. The classified packet data is then sorted into multiple queues in a buffer, and a static component of a queue is updated when the queue receives a sorted, classified data packet. The scheduler, based upon destination port availability and a set of fairness factors including priority weights and positions, selects data packets for dequeuing from a set of corresponding queues among the multiple queues, and the static component of a dequeued queue is updated when a data packet is output from it.
    Type: Grant
    Filed: November 14, 2014
    Date of Patent: May 14, 2019
    Assignee: Cavium, LLC
    Inventors: Vamsi Panchagnula, Heeloo Chung
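
As a rough illustration of the flow in the abstract above, the sketch below classifies packets into per-(source port, destination port) queues and dequeues them with a simple weight-based selection whenever the destination port is free. The class, the weighting rule, and the per-queue counter are assumptions made for this example, not the patent's scheduler.

# Hypothetical sketch of the classify / enqueue / weighted-dequeue flow described above.
from collections import defaultdict, deque

class Scheduler:
    def __init__(self, queue_weights):
        self.queues = defaultdict(deque)       # (src_port, dst_port) -> packets
        self.weights = queue_weights           # fairness weight per queue key
        self.occupancy = defaultdict(int)      # per-queue counter updated on enqueue/dequeue

    def enqueue(self, src_port, dst_port, packet):
        key = (src_port, dst_port)
        self.queues[key].append(packet)        # classify and sort into the matching queue
        self.occupancy[key] += 1               # update the per-queue component on arrival

    def dequeue(self, dst_port_available):
        # Consider only queues whose destination port is currently available.
        candidates = [k for k, q in self.queues.items()
                      if q and dst_port_available(k[1])]
        if not candidates:
            return None
        # Simple fairness rule for the sketch: pick the eligible queue with the
        # largest weight, breaking ties by queue occupancy.
        key = max(candidates, key=lambda k: (self.weights.get(k, 1), len(self.queues[k])))
        packet = self.queues[key].popleft()
        self.occupancy[key] -= 1               # update the component again on departure
        return packet

if __name__ == "__main__":
    sched = Scheduler(queue_weights={(1, 9): 2, (2, 9): 1})
    sched.enqueue(1, 9, "p1")
    sched.enqueue(2, 9, "p2")
    print(sched.dequeue(dst_port_available=lambda p: p == 9))   # 'p1' (higher weight)
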
  • Publication number: 20160142333
    Abstract: A computer-implemented medium uses a scheduler for processing requests by receiving packet data from multiple source ports and classifying the received packet data based upon the source port on which it was received and the destination port to which it is being sent. The classified packet data is then sorted into multiple queues in a buffer, and a static component of a queue is updated when the queue receives a sorted, classified data packet. The scheduler, based upon destination port availability and a set of fairness factors including priority weights and positions, selects data packets for dequeuing from a set of corresponding queues among the multiple queues, and the static component of a dequeued queue is updated when a data packet is output from it.
    Type: Application
    Filed: November 14, 2014
    Publication date: May 19, 2016
    Inventors: Vamsi Panchagnula, Heeloo Chung
  • Patent number: 9276835
    Abstract: An epoch-based network processor internally segments packets for processing and aggregation in epoch payloads. FIFO buffers interact with a memory management unit to efficiently manage the segmentation and aggregation process.
    Type: Grant
    Filed: September 1, 2015
    Date of Patent: March 1, 2016
    Assignee: Force10 Networks, Inc.
    Inventors: Glenn Poole, Brad Danofsky, David Haddad, Ann Gui, Heeloo Chung, Joanna Lin
  • Publication number: 20150372896
    Abstract: An epoch-based network processor internally segments packets for processing and aggregation in epoch payloads. FIFO buffers interact with a memory management unit to efficiently manage the segmentation and aggregation process.
    Type: Application
    Filed: September 1, 2015
    Publication date: December 24, 2015
    Inventors: Glenn Poole, Brad Danofsky, David Haddad, Ann Gui, Heeloo Chung, Joanna Lin
  • Patent number: 9160677
    Abstract: A network packet is segmented for transfer through a switch fabric. The last segment of the packet is allowed to exceed the maximum size of previous segments so as to increase the switch fabric utilization. Other features are also provided.
    Type: Grant
    Filed: July 8, 2014
    Date of Patent: October 13, 2015
    Assignee: Force10 Networks, Inc.
    Inventors: Glenn Poole, Brad Danofsky, David Haddad, Ann Gui, Heeloo Chung, Joanna Lin
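
The segmentation rule described in the abstract above (fixed-size segments, with the final segment permitted to exceed the normal maximum so a tiny trailing segment is avoided) can be sketched as follows; the size thresholds are arbitrary example values, not figures from the patent.

# Illustrative segmentation where the final segment may exceed the normal maximum,
# avoiding a tiny trailing segment. Thresholds here are arbitrary example values.

def segment_packet(packet: bytes, max_seg: int = 64, overflow: int = 16) -> list:
    segments = []
    offset = 0
    while len(packet) - offset > max_seg + overflow:
        segments.append(packet[offset:offset + max_seg])
        offset += max_seg
    segments.append(packet[offset:])   # last segment may be up to max_seg + overflow bytes
    return segments

if __name__ == "__main__":
    sizes = [len(s) for s in segment_packet(b"x" * 200)]
    print(sizes)   # [64, 64, 72]: the final segment exceeds the 64-byte cap
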
  • Publication number: 20140321281
    Abstract: An epoch-based network processor internally segments packets for processing and aggregation in epoch payloads. FIFO buffers interact with a memory management unit to efficiently manage the segmentation and aggregation process.
    Type: Application
    Filed: July 8, 2014
    Publication date: October 30, 2014
    Inventors: Glenn Poole, Brad Danofsky, David Haddad, Ann Gui, Heeloo Chung, Joanna Lin
  • Patent number: 8804751
    Abstract: An epoch-based network processor internally segments packets for processing and aggregation in epoch payloads. FIFO buffers interact with a memory management unit to efficiently manage the segmentation and aggregation process.
    Type: Grant
    Filed: October 2, 2006
    Date of Patent: August 12, 2014
    Assignee: Force10 Networks, Inc.
    Inventors: Glenn Poole, Brad Danofsky, David Haddad, Ann Gui, Heeloo Chung, Joanna Lin
  • Patent number: 7843830
    Abstract: Apparatus and methods for epoch retransmission in a packet network device are described. In at least one embodiment, epoch receivers check received epoch data for errors. When an error is detected, a receiver is allowed to request that the entire epoch be retransmitted. All epoch senders retain transmitted epoch data until the time for requesting a retransmission of that data is past. If retransmission is requested by any receiver, the epoch is “replayed.” This approach mitigates the problem of dropping multiple packets (bundled in a large epoch) due to an intraswitch error with the epoch. Other embodiments are also described and claimed.
    Type: Grant
    Filed: May 5, 2005
    Date of Patent: November 30, 2010
    Assignee: Force 10 Networks, Inc.
    Inventors: Krishnamurthy Subramanian, Heeloo Chung, Glenn Poole
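
A minimal model of the replay mechanism described in the abstract above, assuming a simple epoch-count retention window: the sender keeps each transmitted epoch until the window for requesting retransmission has passed, and an error reported by any receiver replays the whole epoch. All names and the window policy below are illustrative assumptions.

# Hypothetical sketch of epoch retransmission: the sender retains each epoch's data
# until the window for requesting a retransmission has passed.

class EpochSender:
    def __init__(self, retain_epochs=2):
        self.retained = {}                 # epoch_id -> payload kept for possible replay
        self.retain_epochs = retain_epochs

    def send(self, epoch_id, payload):
        self.retained[epoch_id] = payload
        # Forget epochs whose retransmission window has already closed.
        for old in [e for e in self.retained if e <= epoch_id - self.retain_epochs]:
            del self.retained[old]
        return payload

    def retransmit(self, epoch_id):
        # Replay the entire epoch if a receiver reported an error in time.
        return self.retained.get(epoch_id)


if __name__ == "__main__":
    sender = EpochSender(retain_epochs=2)
    sender.send(1, b"epoch-1")
    sender.send(2, b"epoch-2")
    print(sender.retransmit(1))   # b'epoch-1' (still within the replay window)
    sender.send(3, b"epoch-3")
    print(sender.retransmit(1))   # None (epoch 1 aged out of the window)
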
  • Patent number: 7346067
    Abstract: A network processing device stores and aligns data received from an input port prior to forwarding the data to an output port. Data packets arrive at various input ports already having an output queue or virtual output queue assigned. A buffer manager groups one or more packets destined for the same output queue into blocks and stores the blocks in a buffer memory. A linked list is created of the trunks, each of which is an ordered collection of blocks. The trunks are sent to a high-speed second memory and stored together as a unit. In some embodiments the trunks are split on boundaries and stored in a high-speed memory. Once the trunks are stored in the high-speed second memory, the corresponding data is erased from the write-combine buffer memory and the pointers that made up the linked list are returned to a free block pointer pool. The data can then be read from the high-speed second memory very quickly, passed through a switching fabric, and placed back on the computer network for its next destination.
    Type: Grant
    Filed: November 16, 2001
    Date of Patent: March 18, 2008
    Assignee: Force 10 Networks, Inc.
    Inventors: Heeloo Chung, Eugene Lee
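
A hypothetical sketch of the write-combine flow described in the abstract above: packets bound for the same output queue are grouped into fixed-size blocks, blocks are chained into a trunk, and a full trunk is flushed to a faster second memory as one unit. The block and trunk sizes, and the in-memory stand-ins for the hardware structures, are assumptions made for illustration.

# Hypothetical sketch: group packets into blocks per output queue, chain blocks into
# a trunk, and flush the whole trunk to a faster second memory as a single unit.
from collections import defaultdict

BLOCK_SIZE = 4          # packets per block (example value)
TRUNK_BLOCKS = 3        # blocks per trunk (example value)

class BufferManager:
    def __init__(self):
        self.blocks = defaultdict(list)        # output queue -> current open block
        self.trunks = defaultdict(list)        # output queue -> ordered list of blocks
        self.second_memory = defaultdict(list) # stand-in for the high-speed memory

    def enqueue(self, out_queue, packet):
        block = self.blocks[out_queue]
        block.append(packet)
        if len(block) == BLOCK_SIZE:           # block is full: link it onto the trunk
            self.trunks[out_queue].append(list(block))
            block.clear()
        if len(self.trunks[out_queue]) == TRUNK_BLOCKS:
            self._flush(out_queue)

    def _flush(self, out_queue):
        # Store the whole trunk together, then release the buffer-side copies
        # (standing in for returning block pointers to the free pool).
        self.second_memory[out_queue].append(self.trunks[out_queue])
        self.trunks[out_queue] = []

if __name__ == "__main__":
    bm = BufferManager()
    for i in range(BLOCK_SIZE * TRUNK_BLOCKS):   # enough packets to fill one trunk
        bm.enqueue("queue-7", f"pkt{i}")
    print(len(bm.second_memory["queue-7"]))      # 1: one trunk flushed as a unit
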
  • Patent number: 6975638
    Abstract: Methods and apparatus for interleaved weighted fair data packet queue sequencing are disclosed. An interleaving table specifies a queue sequence. A queue sequencer follows the table order on an epoch-by-epoch basis, selecting a queue for each epoch based on the table order. If the selected queue does not have enough data to fill its epoch, the sequencer can step to the next queue in the table order. Because the table is interleaved, higher-priority queues can be visited frequently, improving jitter and latency for packets associated with these queues. The table structure allows all queues at least some portion of the available output bandwidth, and can be organized to afford some queues a much larger portion without having those queues monopolize the output stream for inordinate amounts of time. In some embodiments, each table entry has a programmable epoch value associated with it. The epoch value can be used to weight each table entry respective to the other entries.
    Type: Grant
    Filed: October 13, 2000
    Date of Patent: December 13, 2005
    Assignee: Force10 Networks, Inc.
    Inventors: Yao-Min Chen, Heeloo Chung, Zhijun Tong, Eugene Lee
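
A sketch of the table-driven interleaved sequencing described in the abstract above: the table lists queue indices (higher-priority queues appear more often) and each entry carries an epoch budget; if the selected queue is empty, the sequencer steps to the next entry. The table contents and budgets below are example values only.

# Illustrative sketch of interleaved weighted fair sequencing driven by a table.
from collections import deque

class InterleavedSequencer:
    def __init__(self, table, queues):
        self.table = table        # list of (queue_index, epoch_budget) entries
        self.queues = queues      # list of deques holding packets
        self.pos = 0

    def next_epoch(self):
        """Return up to one epoch's worth of packets from the next non-empty queue."""
        for _ in range(len(self.table)):
            qi, budget = self.table[self.pos]
            self.pos = (self.pos + 1) % len(self.table)
            q = self.queues[qi]
            if q:
                return [q.popleft() for _ in range(min(budget, len(q)))]
        return []                 # every queue is empty

if __name__ == "__main__":
    queues = [deque(["a1", "a2", "a3"]), deque(["b1"]), deque()]
    # Queue 0 appears twice as often as the others, giving it more of the bandwidth.
    seq = InterleavedSequencer([(0, 2), (1, 2), (0, 2), (2, 2)], queues)
    print(seq.next_epoch())   # ['a1', 'a2']
    print(seq.next_epoch())   # ['b1']
    print(seq.next_epoch())   # ['a3']
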
  • Patent number: 6904015
    Abstract: Methods and apparatus for an improvement on Random Early Detection (RED) router congestion avoidance are disclosed. A traffic conditioner stores a drop probability profile as a collection of configurable profile segments. A multi-stage comparator compares an average queue size (AQS) for a packet queue to the segments, and determines which segment the AQS lies within. This segment is keyed to a corresponding drop probability, which is used to make a packet discard/admit decision for a packet. In a preferred implementation, this computational core is surrounded by a set of registers, allowing it to serve multiple packet queues and packets with different discard priorities. Each queue and discard priority can be keyed to a drop probability profile selected from a pool of such profiles. This provides a highly-configurable, inexpensive, and fast RED solution for a high-performance router.
    Type: Grant
    Filed: September 1, 2000
    Date of Patent: June 7, 2005
    Assignee: Force10 Networks, Inc.
    Inventors: Yao-Min Chen, Heeloo Chung
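
The segmented drop-probability profile described in the abstract above can be sketched as a table of (boundary, probability) pairs searched for the segment containing the average queue size (AQS); the matching probability then drives the admit or discard decision. The boundary and probability values below are made-up examples, not figures from the patent.

# Hypothetical sketch of a segmented drop-probability profile for RED-style congestion
# avoidance: find the segment containing the AQS, then admit or drop probabilistically.
import random

# (upper_bound_of_segment, drop_probability); segments sorted by bound.
PROFILE = [(100, 0.0), (200, 0.05), (300, 0.20), (float("inf"), 1.0)]

def drop_probability(aqs, profile=PROFILE):
    for upper_bound, prob in profile:
        if aqs <= upper_bound:          # multi-stage comparison: first matching segment wins
            return prob
    return 1.0

def admit(aqs, rng=random.random):
    # Admit the packet unless a random draw falls under the segment's drop probability.
    return rng() >= drop_probability(aqs)

if __name__ == "__main__":
    print(drop_probability(150))   # 0.05: AQS lies in the second segment
    print(drop_probability(500))   # 1.0: above the last boundary, always drop
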
  • Publication number: 20030095558
    Abstract: A network processing device stores and aligns data received from an input port prior to forwarding the data to an output port. Data packets arrive at various input ports already having an output queue or virtual output queue assigned. A buffer manager groups one or more packets destined for the same output queue into blocks and stores the blocks in a buffer memory. A linked list is created of the trunks, each of which is an ordered collection of blocks. The trunks are sent to a high-speed second memory and stored together as a unit. In some embodiments the trunks are split on boundaries and stored in a high-speed memory. Once the trunks are stored in the high-speed second memory, the corresponding data is erased from the write-combine buffer memory and the pointers that made up the linked list are returned to a free block pointer pool. The data can then be read from the high-speed second memory very quickly, passed through a switching fabric, and placed back on the computer network for its next destination.
    Type: Application
    Filed: November 16, 2001
    Publication date: May 22, 2003
    Applicant: Force 10 Networks, Inc.
    Inventors: Heeloo Chung, Eugene Lee