Patents by Inventor Jinqlih Sang
Jinqlih Sang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 10341260
  Abstract: A network device, such as a network switch, can include an ingress to receive data packets from a network. The ingress can communicate with an egress included in the network device through a fabric included in the network device. At least one of the ingress and the egress can enqueue a data packet prior to receipt of all cells of the data packet. The ingress can also commence dequeue of the cells of the received data packet prior to receipt of the entire data packet from the network. At least one of the ingress and the egress can process the data packets using cut-through processing and store-and-forward processing. In the case of cut-through processing of a data packet at both the ingress and the egress of a network device, such as a CIOQ switch, the fabric can be allocated to provide a prioritized virtual channel through the fabric for the data packet.
  Type: Grant
  Filed: February 2, 2018
  Date of Patent: July 2, 2019
  Assignee: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED
  Inventors: Kandasamy Aravinthan, Rahul Durve, Manoj Lakshmy Gopalakrishnan, Jinqlih Sang, David Lucius Chen
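The abstract above (shared by the three related filings that follow) centers on choosing between cut-through and store-and-forward handling and, when both the ingress and the egress can cut through, reserving a prioritized virtual channel through the fabric. The sketch below is a minimal, hypothetical model of that decision only; the identifiers (`port_state_t`, `select_mode`, and so on) are invented for illustration and are not taken from the patent.

```c
/* Illustrative sketch only: a simplified model of the cut-through vs.
 * store-and-forward decision described in the abstract. All names are
 * hypothetical and do not come from the patent claims. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { MODE_STORE_AND_FORWARD, MODE_CUT_THROUGH } fwd_mode_t;

typedef struct {
    bool ingress_cut_through_ok;  /* ingress queue can start dequeue early */
    bool egress_cut_through_ok;   /* egress port is idle and uncongested   */
} port_state_t;

/* Decide the forwarding mode for a packet; if both ingress and egress can
 * cut through, also ask the fabric for a prioritized virtual channel so
 * the cells are not interleaved behind store-and-forward traffic. */
static fwd_mode_t select_mode(const port_state_t *st, bool *want_priority_vc)
{
    if (st->ingress_cut_through_ok && st->egress_cut_through_ok) {
        *want_priority_vc = true;           /* reserve a prioritized channel */
        return MODE_CUT_THROUGH;
    }
    *want_priority_vc = false;
    return MODE_STORE_AND_FORWARD;
}

int main(void)
{
    port_state_t st = { .ingress_cut_through_ok = true,
                        .egress_cut_through_ok  = true };
    bool prio;
    fwd_mode_t m = select_mode(&st, &prio);
    printf("mode=%s, prioritized VC=%s\n",
           m == MODE_CUT_THROUGH ? "cut-through" : "store-and-forward",
           prio ? "yes" : "no");
    return 0;
}
```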
- Publication number: 20180227247
  Abstract: A network device, such as a network switch, can include an ingress to receive data packets from a network. The ingress can communicate with an egress included in the network device through a fabric included in the network device. At least one of the ingress and the egress can enqueue a data packet prior to receipt of all cells of the data packet. The ingress can also commence dequeue of the cells of the received data packet prior to receipt of the entire data packet from the network. At least one of the ingress and the egress can process the data packets using cut-through processing and store-and-forward processing. In the case of cut-through processing of a data packet at both the ingress and the egress of a network device, such as a CIOQ switch, the fabric can be allocated to provide a prioritized virtual channel through the fabric for the data packet.
  Type: Application
  Filed: February 2, 2018
  Publication date: August 9, 2018
  Applicant: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
  Inventors: Kandasamy Aravinthan, Rahul Durve, Manoj Lakshmy Gopalakrishnan, Jinqlih Sang, David Lucius Chen
- Patent number: 9894013
  Abstract: A network device, such as a network switch, can include an ingress to receive data packets from a network. The ingress can communicate with an egress included in the network device through a fabric included in the network device. At least one of the ingress and the egress can enqueue a data packet prior to receipt of all cells of the data packet. The ingress can also commence dequeue of the cells of the received data packet prior to receipt of the entire data packet from the network. At least one of the ingress and the egress can process the data packets using cut-through processing and store-and-forward processing. In the case of cut-through processing of a data packet at both the ingress and the egress of a network device, such as a CIOQ switch, the fabric can be allocated to provide a prioritized virtual channel through the fabric for the data packet.
  Type: Grant
  Filed: March 3, 2015
  Date of Patent: February 13, 2018
  Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
  Inventors: Kandasamy Aravinthan, Rahul Durve, Manoj Lakshmy Gopalakrishnan, Jinqlih Sang, David Lucius Chen
- Publication number: 20160226797
  Abstract: A network device, such as a network switch, can include an ingress to receive data packets from a network. The ingress can communicate with an egress included in the network device through a fabric included in the network device. At least one of the ingress and the egress can enqueue a data packet prior to receipt of all cells of the data packet. The ingress can also commence dequeue of the cells of the received data packet prior to receipt of the entire data packet from the network. At least one of the ingress and the egress can process the data packets using cut-through processing and store-and-forward processing. In the case of cut-through processing of a data packet at both the ingress and the egress of a network device, such as a CIOQ switch, the fabric can be allocated to provide a prioritized virtual channel through the fabric for the data packet.
  Type: Application
  Filed: March 3, 2015
  Publication date: August 4, 2016
  Inventors: Kandasamy Aravinthan, Rahul Durve, Manoj Lakshmy Gopalakrishnan, Jinqlih Sang, David Lucius Chen
- Patent number: 6760338
  Abstract: Multiple network switch modules have memory interfaces configured for transferring packet data to respective buffer memories. The memory interfaces are also configured to transfer among each other data units of data frames received from different network switch modules. A shared switching logic monitors ("snoops") the data units as they are transferred between network switch modules, providing centralized switching decision logic for multiple network switch modules. The memory interfaces transfer the data units according to a prescribed sequence, optimizing memory bandwidth by concurrently executing a prescribed number of successive memory writes or memory reads. A preferred embodiment includes a distributed memory interface between the network switch modules and a shared memory system.
  Type: Grant
  Filed: May 2, 2000
  Date of Patent: July 6, 2004
  Assignee: Advanced Micro Devices, Inc.
  Inventors: Shashank Merchant, Jinqlih Sang
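As a rough illustration of the "snooping" idea in the abstract above, the sketch below models a single shared decision function observing data units as they pass between switch-module memory interfaces. It is a toy built on assumptions: `data_unit_t`, `snoop_and_decide`, and the hash-based port lookup are invented for this example and do not reflect the patented logic.

```c
/* Illustrative sketch, not the patented design: one shared switching logic
 * "snoops" data units as they move between switch-module memory interfaces,
 * so a single centralized lookup serves several modules. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t  src_module;   /* module that received the frame          */
    uint64_t dst_mac;      /* destination address carried in the unit */
} data_unit_t;

/* Centralized decision logic observes every unit on the shared path and
 * returns an output port, so individual modules need no lookup tables.
 * The modulo "lookup" is a placeholder, not a real forwarding table. */
static int snoop_and_decide(const data_unit_t *u)
{
    return (int)(u->dst_mac % 8);
}

int main(void)
{
    data_unit_t u = { .src_module = 1, .dst_mac = 0x001122334455ULL };
    printf("unit from module %u -> egress port %d\n",
           (unsigned)u.src_module, snoop_and_decide(&u));
    return 0;
}
```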
- Patent number: 6724769
  Abstract: Multiple network switch modules have memory interfaces configured for transferring packet data to respective buffer memories. The memory interfaces are also configured to transfer among each other data units of data frames received from different network switch modules. The memory interfaces transfer the data units according to a prescribed sequence, optimizing memory bandwidth by concurrently executing a prescribed number of successive memory writes or memory reads. An alternative embodiment includes a distributed memory interface between the network switch modules and a shared memory system, where the width of the data bus of the shared memory system equals the total number of bits on the data buses of the switch modules.
  Type: Grant
  Filed: September 23, 1999
  Date of Patent: April 20, 2004
  Assignee: Advanced Micro Devices, Inc.
  Inventor: Jinqlih Sang
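The distinguishing point of this abstract is optimizing memory bandwidth by grouping a prescribed number of successive memory writes or reads. The fragment below sketches that grouping under stated assumptions; `BURST_LEN`, `mem_write_burst`, and the addressing are illustrative stand-ins, not the patent's terms.

```c
/* Minimal sketch of the bandwidth idea: group a fixed number of successive
 * writes into one burst so the shared memory bus is not re-arbitrated per
 * word. Names and sizes are assumptions for illustration only. */
#include <stdint.h>
#include <stdio.h>

#define BURST_LEN 4   /* prescribed number of successive memory writes */

/* Stand-in for the memory interface: issue n back-to-back word writes. */
static void mem_write_burst(uint32_t addr, const uint32_t *words, int n)
{
    for (int i = 0; i < n; i++)
        printf("write 0x%08x -> addr 0x%08x\n",
               (unsigned)words[i], (unsigned)(addr + 4u * (uint32_t)i));
}

int main(void)
{
    uint32_t frame_words[8] = {0};   /* pretend frame data units */
    /* Issue the frame as two bursts of BURST_LEN writes each. */
    for (int off = 0; off < 8; off += BURST_LEN)
        mem_write_burst(0x1000u + 4u * (uint32_t)off, &frame_words[off], BURST_LEN);
    return 0;
}
```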
- Patent number: 6577636
  Abstract: A network switch configured for switching data packets across multiple ports uses an external memory to store data frames. When a data frame is transmitted to the external memory, a frame header portion of the data frame is stored on the switch for processing by decision making logic. The switch memory is configured to store a number of the frame headers corresponding to each port on the switch along with frame pointer information indicating the location in the external memory where the data frame is stored.
  Type: Grant
  Filed: May 21, 1999
  Date of Patent: June 10, 2003
  Assignee: Advanced Micro Devices, Inc.
  Inventors: Jinqlih Sang, Michael Vengchong Lau
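A minimal sketch of the data layout the abstract implies: the frame body resides in external memory at a location named by a frame pointer, while a copy of the header stays on the switch, a few entries per port, for the decision-making logic. The field names, sizes, and counts below are assumptions made for illustration, not values from the patent.

```c
/* Hedged sketch of an on-chip header table paired with external-memory
 * frame storage. HDR_BYTES and HEADERS_PER_PORT are assumed values. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define HDR_BYTES        64   /* assumed on-chip header snapshot size    */
#define HEADERS_PER_PORT  4   /* assumed number of headers kept per port */

typedef struct {
    uint32_t frame_ptr;           /* location of full frame in external memory */
    uint8_t  header[HDR_BYTES];   /* header copy kept on the switch            */
    uint8_t  valid;               /* entry holds a frame awaiting a decision   */
} header_entry_t;

typedef struct {
    header_entry_t slot[HEADERS_PER_PORT];   /* one small table per port */
} port_header_table_t;

int main(void)
{
    port_header_table_t port0 = {0};
    port0.slot[0].frame_ptr = 0x0003A000;            /* body in external memory */
    port0.slot[0].valid = 1;
    memset(port0.slot[0].header, 0xAA, HDR_BYTES);   /* stand-in header copy    */
    printf("port 0, slot 0: frame at external address 0x%08x\n",
           (unsigned)port0.slot[0].frame_ptr);
    return 0;
}
```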
- Patent number: 6563818
  Abstract: A network switch configured for switching data frames across multiple ports utilizes an efficient arbiter to store the data frames. Each port possesses queuing logic for requesting a free pointer from a free buffer queue. Multi-level arbitration logic arbitrates all requests of equal priority from the network switch ports in a round-robin scheme. The arbitration logic comprises a plurality of cells that are cascaded to output an acknowledgement signal in response to an inhibit signal and a request signal, as well as a counter that is incremented upon an asserted acknowledgement signal.
  Type: Grant
  Filed: May 20, 1999
  Date of Patent: May 13, 2003
  Assignee: Advanced Micro Devices, Inc.
  Inventors: Jinqlih Sang, Edward Yang
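To make the cascaded arbiter concrete, the sketch below models cells in which a request is granted only if no earlier cell in the chain has already granted (the inhibit), with a counter rotating the starting cell so grants proceed round-robin. This is a behavioral toy, not the patented circuit, and all identifiers are hypothetical.

```c
/* Illustrative model (not RTL from the patent) of cascaded round-robin
 * arbiter cells with an inhibit chain and a rotating start counter. */
#include <stdbool.h>
#include <stdio.h>

#define NPORTS 4

/* Walk the cells starting at 'start'; the first asserted request wins and
 * inhibits every later cell. Returns the granted port, or -1 if idle. */
static int arbitrate(const bool req[NPORTS], int start)
{
    bool inhibit = false;
    int granted = -1;
    for (int i = 0; i < NPORTS; i++) {
        int cell = (start + i) % NPORTS;
        bool ack = req[cell] && !inhibit;   /* per-cell grant logic */
        if (ack) { granted = cell; inhibit = true; }
    }
    return granted;
}

int main(void)
{
    bool req[NPORTS] = { true, false, true, true };
    int counter = 0;                              /* rotates after each grant */
    for (int round = 0; round < 3; round++) {
        int g = arbitrate(req, counter);
        if (g >= 0) counter = (g + 1) % NPORTS;   /* advance round-robin start */
        printf("round %d: granted port %d\n", round, g);
    }
    return 0;
}
```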
- Patent number: 6401147
  Abstract: A programmable split-queue structure includes a first queue area for receiving entries, a second queue area for outputting entries input to said first queue area, and a queue overflow engine logically coupled to the first queue area and the second queue area. The queue overflow engine functions to transfer entries from the first queue area to the second queue area using one of two transfer modes. The queue overflow engine selects the most appropriate transfer mode based on a prescribed threshold value that can be dynamically programmed. An overflow storage area having high capacity may be provided in an external memory in order to increase the overall capacity of the queue structure.
  Type: Grant
  Filed: May 24, 1999
  Date of Patent: June 4, 2002
  Assignee: Advanced Micro Devices, Inc.
  Inventors: Jinqlih Sang, Edward Yang, Bahadir Erimli
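The sketch below illustrates the split-queue idea under stated assumptions: entries move from the first (input) area to the second (output) area directly while occupancy is below a programmable threshold, and through an external overflow area once the threshold is crossed. `THRESHOLD_DEFAULT`, `select_transfer_mode`, and the two-mode policy shown are invented names, not the patent's.

```c
/* Sketch of a programmable split queue's mode selection; the threshold is
 * assumed to be a software-writable value, per the abstract. */
#include <stdio.h>

#define THRESHOLD_DEFAULT 8   /* programmable; below this, bypass overflow */

typedef enum { XFER_DIRECT, XFER_VIA_OVERFLOW } xfer_mode_t;

/* The "queue overflow engine": pick a transfer mode from current depth. */
static xfer_mode_t select_transfer_mode(int output_depth, int threshold)
{
    return (output_depth < threshold) ? XFER_DIRECT : XFER_VIA_OVERFLOW;
}

int main(void)
{
    for (int depth = 6; depth <= 10; depth += 2) {
        xfer_mode_t m = select_transfer_mode(depth, THRESHOLD_DEFAULT);
        printf("depth=%d -> %s\n", depth,
               m == XFER_DIRECT ? "direct to output area"
                                : "via external overflow area");
    }
    return 0;
}
```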
- Patent number: 6192028
  Abstract: A network switch having a shared memory architecture for storing data frames has a set of programmable thresholds that specify when flow control should be initiated on a selected network port. The network switch includes a queue for storing free frame pointers, each specifying available memory locations in an external memory for storing data frames received from a network station. The network switch takes a frame pointer from a free buffer queue for each received data frame, and stores the received data frame in the location in external memory specified by the frame pointer while a decision making engine within the switch determines the appropriate destination ports.
  Type: Grant
  Filed: December 18, 1997
  Date of Patent: February 20, 2001
  Assignee: Advanced Micro Devices, Inc.
  Inventors: Philip Simmons, Bahadir Erimli, Jinqlih Sang, Eric Tsin-Ho Leung, Ian Crayford, Jayant Kadambi, Denise Kerstein, Thomas Jefferson Runaldue
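A hedged sketch of the thresholding this abstract describes: when the count of free frame pointers drops below a programmable per-port watermark, flow control is asserted on that port. The watermark values and identifiers below are illustrative assumptions, not the switch's actual register map.

```c
/* Toy model of per-port programmable flow-control thresholds driven by
 * the depth of the free buffer queue (count of free frame pointers). */
#include <stdbool.h>
#include <stdio.h>

#define NPORTS 4

static int free_frame_pointers = 100;                    /* free buffer queue depth */
static int low_watermark[NPORTS] = { 32, 32, 64, 16 };   /* programmable thresholds */

/* Flow control should be initiated when free pointers fall below the
 * port's programmed watermark. */
static bool should_flow_control(int port)
{
    return free_frame_pointers < low_watermark[port];
}

int main(void)
{
    free_frame_pointers = 24;   /* external memory nearly full of frames */
    for (int p = 0; p < NPORTS; p++)
        printf("port %d: flow control %s\n", p,
               should_flow_control(p) ? "asserted" : "idle");
    return 0;
}
```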
- Patent number: 6167054
  Abstract: A network switch having a shared memory architecture for storing data frames has a set of programmable thresholds that specify when flow control should be initiated on full-duplex network ports. The network switch includes a queue for storing free frame pointers that specify available memory locations in an external memory for storing data frames received from a network station. The network switch takes a frame pointer from a free buffer queue for each received data frame, and stores the received data frame in the location in external memory specified by the frame pointer while a decision making engine within the switch determines the appropriate destination ports. Flow control is initiated based on the number of available frame pointers by transmitting a PAUSE frame having a selected PAUSE interval to a transmitting network station.
  Type: Grant
  Filed: December 18, 1997
  Date of Patent: December 26, 2000
  Assignee: Advanced Micro Devices, Inc.
  Inventors: Philip Simmons, Bahadir Erimli, Jinqlih Sang, Peter Ka-Fai Chow, Ian Crayford, Jayant Kadambi, Denise Kerstein, Thomas Jefferson Runaldue
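This abstract adds PAUSE-based flow control on full-duplex ports. The sketch below builds a standard IEEE 802.3x PAUSE frame (reserved multicast destination 01-80-C2-00-00-01, MAC Control EtherType 0x8808, opcode 0x0001) and picks a pause interval from the free-pointer count; the interval-selection rule shown is an invented example, not the patent's policy.

```c
/* Sketch of constructing an IEEE 802.3x PAUSE frame; the quanta-selection
 * policy below is an illustrative assumption. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Reserved multicast address defined by 802.3x for MAC Control frames. */
static const uint8_t PAUSE_DA[6] = {0x01, 0x80, 0xC2, 0x00, 0x00, 0x01};

static void build_pause_frame(uint8_t out[60], const uint8_t src[6],
                              uint16_t pause_quanta)
{
    memset(out, 0, 60);
    memcpy(out, PAUSE_DA, 6);            /* destination address          */
    memcpy(out + 6, src, 6);             /* sending port's MAC address   */
    out[12] = 0x88; out[13] = 0x08;      /* MAC Control EtherType 0x8808 */
    out[14] = 0x00; out[15] = 0x01;      /* PAUSE opcode                 */
    out[16] = (uint8_t)(pause_quanta >> 8);      /* pause time, 512-bit   */
    out[17] = (uint8_t)(pause_quanta & 0xFF);    /* time quanta           */
}

int main(void)
{
    uint8_t frame[60];
    const uint8_t port_mac[6] = {0x02, 0x00, 0x00, 0x00, 0x00, 0x01};
    int free_ptrs = 10;                                   /* nearly exhausted */
    uint16_t quanta = (free_ptrs < 16) ? 0xFFFF : 0x0100; /* example policy   */
    build_pause_frame(frame, port_mac, quanta);
    printf("PAUSE interval = 0x%04x quanta\n", (frame[16] << 8) | frame[17]);
    return 0;
}
```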