Patents by Inventor Yeong Tat Liew
Yeong Tat Liew has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250141779
Abstract: The invention relates to a computer-implemented method (100) of generating and assigning identifier information to each router in a network-on-chip for arbitrating data packets. The method (100) comprises the steps of: generating and assigning a planar-axis coordinate, Z, for each router; generating and assigning an ascending value to each router link layer for every link between two routers of the same coordinate; selecting the origin router with the smallest value; assigning a coordinate comprising a horizontal-axis (X) and a vertical-axis (Y); calculating the distance of the coordinates; generating and assigning coordinates to adjacent routers; applying coordinate shrinking to get all coordinates in positive integer numbers with optimized ascending order; and repeating the steps to complete the generation and assignment of coordinates to all planar-axis routers.
Type: Application
Filed: January 19, 2024
Publication date: May 1, 2025
Applicant: SKYECHIP SDN BHD
Inventors: Yin Chong Hew, Soon Hock Saw, Kee Xian Low, Yeong Tat Liew
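In broad strokes, the abstract describes a per-layer coordinate assignment: pick an origin router, walk outward assigning (X, Y) coordinates to adjacent routers, then shrink all coordinates into positive integers. Below is a minimal Python sketch of that idea under simplifying assumptions (a single planar layer, an adjacency map with compass-direction labels); the function and parameter names are illustrative, not taken from the patent.

```python
from collections import deque

def assign_coordinates(adjacency, origin, offsets):
    """Breadth-first (X, Y) assignment over one planar (Z) layer.

    adjacency: router id -> {neighbour id: direction label}
    origin:    router id chosen as the origin
    offsets:   direction label -> (dx, dy) step,
               e.g. {"E": (1, 0), "W": (-1, 0), "N": (0, 1), "S": (0, -1)}
    Returns router id -> (x, y), shifted so every coordinate is a positive
    integer (the "coordinate shrinking" step in the abstract).
    """
    coords = {origin: (0, 0)}
    queue = deque([origin])
    while queue:
        router = queue.popleft()
        x, y = coords[router]
        for neighbour, direction in adjacency[router].items():
            if neighbour not in coords:
                dx, dy = offsets[direction]
                coords[neighbour] = (x + dx, y + dy)
                queue.append(neighbour)
    # Shift every coordinate into the positive integers, starting at 1.
    min_x = min(x for x, _ in coords.values())
    min_y = min(y for _, y in coords.values())
    return {r: (x - min_x + 1, y - min_y + 1) for r, (x, y) in coords.items()}
```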
-
Publication number: 20250138623
Abstract: The present invention relates to a system for network-on-chip power management. The system comprises a primary network-on-chip comprising multiple components, each component having a power controller, and is characterized by a secondary network-on-chip comprising a secondary network-on-chip master node and a plurality of secondary network-on-chip nodes connected thereto, the plurality of secondary network-on-chip nodes being associated with the components of the primary network-on-chip for power managing individual and link components of the primary network-on-chip, and a power management unit connected to the secondary network-on-chip master node, configured to poll status registers of the components of the primary network-on-chip for accessing power states of each component, to access routing information of the components of the primary network-on-chip, and to send requests to the secondary network-on-chip nodes for powering on or off the associated components of the primary network-on-chip through the power controller.
Type: Application
Filed: December 29, 2023
Publication date: May 1, 2025
Applicant: SKYECHIP SDN BHD
Inventors: Chee Hak Teh, Yu Ying Ong, Soon Chieh Lim, Muhamad Aidil Bin Jazmi, Yeong Tat Liew, Weng Li Leow
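Read as an algorithm, the power management unit polls each component's status registers over the secondary network-on-chip and asks the associated secondary node to power the component on or off through its power controller. The following is a rough Python sketch of that polling loop; the busy flag stands in for a status register, and every class and attribute name is illustrative, not from the patent.

```python
class Component:
    """One primary-NoC component with a power controller and a status
    register (modelled here as a simple `busy` flag)."""
    def __init__(self, name, downstream=()):
        self.name = name
        self.downstream = list(downstream)   # link components it fans out to
        self.busy = False
        self.power = "on"                    # set through the power controller

class SecondaryNode:
    """Secondary-NoC node that drives one component's power controller."""
    def __init__(self, component):
        self.component = component
    def request_power(self, state):          # "on" or "off"
        self.component.power = state

class PowerManagementUnit:
    """Polls component status registers via the secondary NoC master node
    and requests that components be powered on or off."""
    def __init__(self, nodes):
        self.nodes = nodes                   # list of SecondaryNode

    def poll_and_manage(self):
        for node in self.nodes:
            comp = node.component
            # Power off a component only if it and its downstream links are idle.
            idle = not comp.busy and all(not d.busy for d in comp.downstream)
            node.request_power("off" if idle else "on")
```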
-
Patent number: 12199884
Abstract: A method comprises the steps of receiving input from a user via a user interface and selecting a plurality of flits from a plurality of ingress into a plurality of virtual channels, followed by selecting the flits from the virtual channels into a plurality of egress based on the input from the user. The selection of the flits into the virtual channels and the egress is characterized by the steps of computing default and elevated bandwidths of the virtual channels, computing default and elevated weights of the virtual channels based on the default and elevated bandwidths, and generating a weightage lookup table using the default and elevated weights to perform arbitration weightage lookup for the flits with default and elevated priority levels for selecting the flits into the virtual channels and the egress, wherein the flits from the different ingress comprise different default and elevated weights.
Type: Grant
Filed: December 13, 2022
Date of Patent: January 14, 2025
Assignee: SKYECHIP SDN BHD
Inventors: Yeong Tat Liew, Yu Ying Ong, Soon Chieh Lim, Weng Li Leow, Chee Hak Teh
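The key data structure in this abstract is the weightage lookup table: each virtual channel gets a default and an elevated weight derived from its default and elevated bandwidth, and the arbiter looks up the weight matching the priority level of the flit at the head of each channel. Below is a small Python sketch of that lookup, under the assumption that weights are simply normalized bandwidth shares; build_weightage_table and pick_channel are illustrative names.

```python
def build_weightage_table(channels):
    """Build an arbitration weightage lookup table from per-virtual-channel
    bandwidth targets. `channels` maps a VC name to a dict with
    'default_bw' and 'elevated_bw' entries (fractions of link bandwidth)."""
    total_default = sum(c["default_bw"] for c in channels.values())
    total_elevated = sum(c["elevated_bw"] for c in channels.values())
    return {
        name: {
            "default": c["default_bw"] / total_default,
            "elevated": c["elevated_bw"] / total_elevated,
        }
        for name, c in channels.items()
    }

def pick_channel(table, pending):
    """Pick the virtual channel with the largest applicable weight.
    `pending` maps a VC name to the priority level of its head flit
    ('default' or 'elevated'), or None if nothing is queued."""
    candidates = {
        name: table[name][prio]
        for name, prio in pending.items() if prio is not None
    }
    return max(candidates, key=candidates.get) if candidates else None

# Example: vc1's head flit is elevated, so its elevated weight (0.8)
# beats vc0's default weight (0.5).
table = build_weightage_table({
    "vc0": {"default_bw": 0.5, "elevated_bw": 0.2},
    "vc1": {"default_bw": 0.5, "elevated_bw": 0.8},
})
assert pick_channel(table, {"vc0": "default", "vc1": "elevated"}) == "vc1"
```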
-
Publication number: 20240297846
Abstract: The present invention discloses a computer-implemented method of data transmission for a network-on-chip that allows high-performance routing through a dynamically allocated buffer. The method comprises the steps of transferring a command or data in the form of a plurality of flits from a source node to a router and further to a destination node, and transmitting the flits from the destination node back to the router, wherein the flits are packetized for transmission according to channel width and transaction width, sequence, and priority routing through physical and virtual channels.
Type: Application
Filed: June 14, 2023
Publication date: September 5, 2024
Applicant: SKYECHIP SDN BHD
Inventors: Chee Hak Teh, Yu Ying Ong, Soon Chieh Lim, Weng Li Leow, Yeong Tat Liew, Chuen Heong Khuan, Manobindra Gandhi, Muhamad Aidil Bin Jazmi
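The packetization step splits a transaction into flits sized to the channel width and tags each flit with sequence and priority information so it can be routed through physical and virtual channels and reassembled in order. A minimal Python sketch of such a split is below, assuming byte-addressed payloads and a fixed channel width; the Flit fields and the packetize helper are illustrative, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class Flit:
    packet_id: int
    sequence: int
    is_tail: bool
    priority: int
    payload: bytes

def packetize(packet_id, payload, channel_width_bytes, priority=0):
    """Split one transaction payload into flits sized to the channel width,
    tagging each flit with a sequence number and priority so a router can
    arbitrate them and the receiver can reassemble them in order."""
    flits = []
    for seq, offset in enumerate(range(0, len(payload), channel_width_bytes)):
        chunk = payload[offset:offset + channel_width_bytes]
        flits.append(Flit(
            packet_id=packet_id,
            sequence=seq,
            is_tail=offset + channel_width_bytes >= len(payload),
            priority=priority,
            payload=chunk,
        ))
    return flits

# Example: a 100-byte transaction on a 32-byte channel becomes 4 flits.
assert len(packetize(1, b"\x00" * 100, channel_width_bytes=32)) == 4
```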
-
Publication number: 20240163223
Abstract: The present invention relates to a method (100) for network-on-chip arbitration. The method (100) comprises the steps of receiving input from a user via a user interface and selecting a plurality of flits from a plurality of ingress into a plurality of virtual channels, followed by selecting the flits from the virtual channels into a plurality of egress based on the input from the user. The selection of the flits into the virtual channels and the egress is characterized by the steps of computing default and elevated bandwidths of the virtual channels, computing default and elevated weights of the virtual channels based on the default and elevated bandwidths, and generating a weightage lookup table using the default and elevated weights to perform arbitration weightage lookup for the flits with default and elevated priority levels for selecting the flits into the virtual channels and the egress, wherein the flits from the different ingress comprise different default and elevated weights.
Type: Application
Filed: December 13, 2022
Publication date: May 16, 2024
Applicant: SKYECHIP SDN BHD
Inventors: Yeong Tat Liew, Yu Ying Ong, Soon Chieh Lim, Weng Li Leow, Chee Hak Teh
-
Publication number: 20240070039
Abstract: The present invention relates to a method of debugging a targeted area or the whole network-on-chip (NOC) (101), whereby said targeted area or the whole NOC is triggered to enter a freeze state, the state of the targeted area or the whole NOC (101) is captured and the debug information is unloaded, and said targeted area or the whole NOC is finally triggered to enter an unfreeze state to allow forward progress to resume, using existing buffer storage, thus allowing the user to debug and identify the source of an issue without requiring a significant amount of extra storage.
Type: Application
Filed: October 7, 2022
Publication date: February 29, 2024
Inventors: Yu Ying Ong, Chee Hak Teh, Soon Chieh Lim, Weng Li Leow, Muhamad Aidil Bin Jazmi, Yeong Tat Liew
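The described flow is freeze, capture, unfreeze: the targeted routers are frozen so in-flight state stops changing, their existing buffers are read out as debug information, and they are then unfrozen so forward progress resumes. A short Python sketch of that sequence is below, assuming each router object exposes freeze(), unfreeze() and dump_state() methods; these method and class names are illustrative, not part of the patent.

```python
class NocDebugController:
    """Freeze-capture-unfreeze debug flow over a set of NoC routers.
    Each router is assumed to expose freeze(), unfreeze() and dump_state()
    backed by its existing buffer storage."""
    def __init__(self, routers):
        self.routers = routers

    def capture(self, targets=None):
        # Debug either a targeted area or, by default, the whole NoC.
        targets = targets if targets is not None else self.routers
        # 1. Freeze the targeted area so in-flight state stops changing.
        for r in targets:
            r.freeze()
        try:
            # 2. Unload debug information from the frozen routers' buffers.
            snapshot = {r.name: r.dump_state() for r in targets}
        finally:
            # 3. Unfreeze so forward progress resumes, even if dumping failed.
            for r in targets:
                r.unfreeze()
        return snapshot
```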
-
Patent number: 10853034
Abstract: An integrated circuit is provided that includes common factor mass multiplier (CFMM) circuitry, which multiplies a common factor operand by a large number of multiplier operands. The CFMM circuitry may be implemented as an instance-specific version or a non-instance-specific version. The instance-specific version may also be fully enumerated so that the hardware does not have to be redesigned, assuming all possible unique multiplier values are implemented. Either version can be formed on a programmable integrated circuit or an application-specific integrated circuit. CFMM circuitry configured in this way can be used to support convolutional neural networks or any operation that requires a straight common factor multiply. Any adder component within the CFMM circuitry may be implemented using bit-serial adders. The bit-serial adders may be further connected in a tree in CNN applications to sum together many input streams.
Type: Grant
Filed: September 28, 2018
Date of Patent: December 1, 2020
Assignee: Intel Corporation
Inventors: Thiam Khean Hah, Jason Gee Hock Ong, Yeong Tat Liew, Carl Ebeling, Vamsi Nalluri
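The core idea of a common factor mass multiplier is to reuse work across many products that share one operand: shifted copies of the common factor are formed once, and each multiplier simply selects and sums the copies that match its set bits (in hardware, with bit-serial adders arranged in a tree). Below is a simplified Python sketch of that shift-and-add sharing for unsigned multipliers; cfmm and its parameters are illustrative names, not the patented circuitry.

```python
def cfmm(common_factor, multipliers, width=8):
    """Common-factor mass multiply: multiply one operand by many multipliers
    while sharing the shifted copies of the common factor. The shifted
    copies (common_factor << b) are computed once and reused by every
    product, which is the sharing a hardware CFMM exploits."""
    shifted = [common_factor << b for b in range(width)]   # shared partial terms
    products = []
    for m in multipliers:
        acc = 0
        for b in range(width):
            if (m >> b) & 1:          # add the shared shifted term for each set bit
                acc += shifted[b]
        products.append(acc)
    return products

# Example: 7 * [3, 5, 12] -> [21, 35, 84]
assert cfmm(7, [3, 5, 12]) == [21, 35, 84]
```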
-
Patent number: 10585621
Abstract: A systolic array implemented in circuitry of an integrated circuit includes a processing element array including processing elements. The systolic array includes one or more feeder circuits communicatively coupled to the processing element array. Each of the one or more feeder circuits includes a first section configured to receive data stored in memory external to the integrated circuit, and a second section configured to send the received data to the processing element array, wherein data transferring from the memory to the processing element array is double buffered by the first section and the second section. The systolic array also includes one or more drain circuits communicatively coupled to the processing element array, including one or more memory buffers configured to store data output by the processing element array.
Type: Grant
Filed: September 29, 2017
Date of Patent: March 10, 2020
Assignee: Intel Corporation
Inventors: Randy Huang, Yeong Tat Liew, Jason Gee Hock Ong
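The feeder circuits here are double buffered: one section is filled from external memory while the other section sends its contents to the processing element array, and the two swap each tile. The Python sketch below shows only that ping-pong indexing, in sequential form (in hardware the fill and drain proceed concurrently); fetch, send and DoubleBufferedFeeder are illustrative names, not from the patent.

```python
class DoubleBufferedFeeder:
    """Ping-pong feeder: one buffer fills from external memory while the
    other drains into the processing-element array, then the two swap."""
    def __init__(self, fetch, send):
        self.fetch = fetch            # callable: tile index -> data block
        self.send = send              # callable: data block -> None (to the PE array)
        self.buffers = [None, None]
        self.fill = 0                 # index of the buffer currently being filled

    def run(self, num_tiles):
        if num_tiles == 0:
            return
        self.buffers[self.fill] = self.fetch(0)                 # prefetch first tile
        for tile in range(num_tiles):
            drain = self.fill
            self.fill ^= 1                                      # swap roles
            if tile + 1 < num_tiles:
                self.buffers[self.fill] = self.fetch(tile + 1)  # fill next tile
            self.send(self.buffers[drain])                      # drain current tile
```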
-
Publication number: 20190303103
Abstract: An integrated circuit is provided that includes common factor mass multiplier (CFMM) circuitry, which multiplies a common factor operand by a large number of multiplier operands. The CFMM circuitry may be implemented as an instance-specific version or a non-instance-specific version. The instance-specific version may also be fully enumerated so that the hardware does not have to be redesigned, assuming all possible unique multiplier values are implemented. Either version can be formed on a programmable integrated circuit or an application-specific integrated circuit. CFMM circuitry configured in this way can be used to support convolutional neural networks or any operation that requires a straight common factor multiply. Any adder component within the CFMM circuitry may be implemented using bit-serial adders. The bit-serial adders may be further connected in a tree in CNN applications to sum together many input streams.
Type: Application
Filed: September 28, 2018
Publication date: October 3, 2019
Inventors: Thiam Khean Hah, Jason Gee Hock Ong, Yeong Tat Liew, Carl Ebeling, Vamsi Nalluri
-
Publication number: 20180307438
Abstract: A systolic array implemented in circuitry of an integrated circuit includes a processing element array including processing elements. The systolic array includes one or more feeder circuits communicatively coupled to the processing element array. Each of the one or more feeder circuits includes a first section configured to receive data stored in memory external to the integrated circuit, and a second section configured to send the received data to the processing element array, wherein data transferring from the memory to the processing element array is double buffered by the first section and the second section. The systolic array also includes one or more drain circuits communicatively coupled to the processing element array, including one or more memory buffers configured to store data output by the processing element array.
Type: Application
Filed: September 29, 2017
Publication date: October 25, 2018
Inventors: Randy Huang, Yeong Tat Liew, Jason Gee Hock Ong