Patents by Inventor CHUNG-LUN CHAN
CHUNG-LUN CHAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230404845
Abstract: A foam stick includes a tube body, a core, and two covers. The tube body has an accommodating groove with two open ends. The core is disposed in the accommodating groove of the tube body. The two covers are disposed at both ends of the tube body to close the accommodating groove. The foam stick is composed of the tube body, the core and the covers, so the material utilization rate is high. Through the combination of the tube body, the core and the covers, the foam stick has different characteristics and can be used as rollers or floats.
Type: Application
Filed: June 17, 2022
Publication date: December 21, 2023
Inventors: Tien-Szu Hsu, Shu-Jin Chen, Yi-Ping Chung, Jia-Rui Xu, Geng-Lin Liu, Chung-Lun Chan
-
Publication number: 20230405405
Abstract: A balance pad includes a base. A plurality of first pillars and a peripheral wall are uprightly disposed on the base. The peripheral wall surrounds the first pillars so that the first pillars are located within the peripheral wall. The first pillars reduce the stability of the balance pad, increasing the difficulty of balance exercises. Because the peripheral wall surrounds the first pillars, the first pillars can abut against the peripheral wall, which prevents them from inclining excessively and injuring a user.
Type: Application
Filed: June 17, 2022
Publication date: December 21, 2023
Inventors: Tien-Szu Hsu, Shu-Jin Chen, Yi-Ping Chung, Jia-Rui Xu, Geng-Lin Liu, Chung-Lun Chan
-
Patent number: 10318427
Abstract: An instruction in a first cache line may be identified and an address associated with the instruction may be determined. The address may be determined to cross a cache line boundary associated with the first cache line and a second cache line. In response to determining that the address crosses the cache line boundary, the instruction may be adjusted based on a portion of the address included in the first cache line and a second instruction may be created based on a portion of the address included in the second cache line. The second instruction may be injected into an instruction pipeline after the adjusted instruction.
Type: Grant
Filed: December 18, 2014
Date of Patent: June 11, 2019
Assignee: Intel Corporation
Inventors: Ramon Matas, Chung-Lun Chan, Alexey P. Suprun, Aditya Kesiraju
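The splitting idea in this abstract can be illustrated with a minimal sketch. This is not the patented hardware logic; the 64-byte line size, the function name, and the tuple representation are all assumptions chosen for illustration.

```python
CACHE_LINE = 64  # assumed line size in bytes; real hardware may differ

def split_on_boundary(addr, size):
    """Split an access of `size` bytes at `addr` when it crosses a cache
    line boundary. Returns one (addr, size) piece if the access fits in
    the first line, or two pieces: the adjusted portion in the first
    line and the injected portion starting at the second line."""
    end = addr + size
    boundary = (addr // CACHE_LINE + 1) * CACHE_LINE  # start of next line
    if end <= boundary:
        return [(addr, size)]  # no crossing: one instruction suffices
    # first piece covers bytes up to the boundary (adjusted instruction),
    # second piece covers the remainder (instruction injected afterwards)
    return [(addr, boundary - addr), (boundary, end - boundary)]
```

For example, an 8-byte access at address 60 crosses the 64-byte boundary and yields two pieces, mirroring the adjusted instruction followed by the injected one.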
-
Patent number: 10261904
Abstract: Operations associated with a memory and operations associated with one or more functional units may be received. A dependency between the operations associated with the memory and the operations associated with one or more of the functional units may be determined. A first ordering may be created for the operations associated with the memory. Furthermore, a second ordering may be created for the operations associated with one or more of the functional units based on the determined dependency and the first ordering of the operations associated with the memory.
Type: Grant
Filed: December 7, 2017
Date of Patent: April 16, 2019
Assignee: Intel Corporation
Inventors: Chunhui Zhang, George Z. Chrysos, Edward T. Grochowski, Ramacharan Sundararaman, Chung-Lun Chan, Federico Ardanaz
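The two-step ordering described here can be sketched as follows. This is a simplified model under assumptions of my own (a dependency map from each functional-unit operation to the memory operations it must follow), not the claimed mechanism.

```python
def order_ops(mem_ops, fu_ops, deps):
    """Create a first ordering for memory operations (here simply their
    received order) and a second ordering for functional-unit operations,
    ranking each FU op by the position of its latest memory dependency.
    `deps` maps an FU op to the set of memory ops it depends on."""
    mem_order = {op: i for i, op in enumerate(mem_ops)}  # first ordering
    # second ordering: an FU op sorts just after its latest dependency;
    # ops with no memory dependency (rank -1) come first
    fu_order = sorted(
        fu_ops,
        key=lambda op: max((mem_order[m] for m in deps.get(op, ())), default=-1),
    )
    return mem_ops, fu_order
```

With memory ops `m0, m1` and FU ops where `f0` depends on `m1` and `f1` on `m0`, the second ordering places `f1` before `f0`.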
-
Patent number: 10108554
Abstract: Methods, systems, and apparatuses relating to sharing translation lookaside buffer entries are described. In one embodiment, a processor includes one or more cores to execute a plurality of threads, a translation lookaside buffer comprising a plurality of entries, each entry comprising a virtual address to physical address translation and a plurality of bit positions, and each set bit of the plurality of bit positions in each entry indicating that the virtual address to physical address translation is valid for a respective thread of the plurality of threads, and a memory management circuit to clear all set bits for a thread by asserting a reset command to a respective reset port of the translation lookaside buffer for the thread, wherein the translation lookaside buffer comprises a separate reset port for each of the plurality of threads.
Type: Grant
Filed: December 5, 2016
Date of Patent: October 23, 2018
Assignee: Intel Corporation
Inventors: Chung-Lun Chan, Ramon Matas
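The per-thread valid-bit scheme can be modeled in a few lines. This is a behavioral sketch only; class and function names are hypothetical, and a real TLB clears the bits in hardware in a single cycle rather than by iteration.

```python
class SharedTLBEntry:
    """A TLB entry whose translation can be valid for several threads at
    once: bit i of `valid` set means the translation is valid for thread i."""
    def __init__(self, vpn, pfn):
        self.vpn, self.pfn = vpn, pfn  # virtual/physical page numbers
        self.valid = 0                 # per-thread valid bitmask

    def set_valid(self, tid):
        self.valid |= 1 << tid

    def is_valid(self, tid):
        return bool(self.valid & (1 << tid))

def reset_thread(entries, tid):
    """Model of asserting that thread's reset port: clear the thread's
    valid bit in every entry, leaving other threads' bits untouched."""
    mask = ~(1 << tid)
    for e in entries:
        e.valid &= mask
```

Clearing thread 0 invalidates its view of every entry while thread 1's translations stay valid, which is the point of giving each thread its own reset port.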
-
Publication number: 20180173628
Abstract: Operations associated with a memory and operations associated with one or more functional units may be received. A dependency between the operations associated with the memory and the operations associated with one or more of the functional units may be determined. A first ordering may be created for the operations associated with the memory. Furthermore, a second ordering may be created for the operations associated with one or more of the functional units based on the determined dependency and the first ordering of the operations associated with the memory.
Type: Application
Filed: December 7, 2017
Publication date: June 21, 2018
Inventors: Chunhui Zhang, George Z. Chrysos, Edward T. Grochowski, Ramacharan Sundararaman, Chung-Lun Chan, Federico Ardanaz
-
Publication number: 20180157598
Abstract: Methods, systems, and apparatuses relating to sharing translation lookaside buffer entries are described. In one embodiment, a processor includes one or more cores to execute a plurality of threads, a translation lookaside buffer comprising a plurality of entries, each entry comprising a virtual address to physical address translation and a plurality of bit positions, and each set bit of the plurality of bit positions in each entry indicating that the virtual address to physical address translation is valid for a respective thread of the plurality of threads, and a memory management circuit to clear all set bits for a thread by asserting a reset command to a respective reset port of the translation lookaside buffer for the thread, wherein the translation lookaside buffer comprises a separate reset port for each of the plurality of threads.
Type: Application
Filed: December 5, 2016
Publication date: June 7, 2018
Inventors: Chung-Lun Chan, Ramon Matas
-
Patent number: 9891914
Abstract: An apparatus and method for performing an efficient scatter operation. For example, one embodiment of a processor comprises: an allocator unit to receive a scatter operation comprising a number of data elements and responsively allocate resources to execute the scatter operation; a memory execution cluster comprising at least a portion of the resources to execute the scatter operation, the resources including one or more store data buffers and one or more store address buffers; and a senior store pipeline to transfer store data elements from the store data buffers to system memory using addresses from the store address buffers prior to retirement of the scatter operation.
Type: Grant
Filed: April 10, 2015
Date of Patent: February 13, 2018
Assignee: Intel Corporation
Inventors: Ramon Matas, Alexey P. Suprun, Roger Gramunt, Chung-Lun Chan, Rammohan Padmanabhan
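The buffer-then-drain flow in this abstract can be sketched at a high level. The function name and the use of a dict to stand in for system memory are illustrative assumptions; real store buffers, senior store timing, and retirement are far more involved.

```python
def execute_scatter(addrs, data, memory):
    """Sketch of a scatter: the allocator fills store-address and
    store-data buffers for each element, then a senior-store-style loop
    drains the data to `memory` (a dict standing in for system memory)
    using the buffered addresses."""
    store_addr_buf = list(addrs)  # allocated store address buffers
    store_data_buf = list(data)   # allocated store data buffers
    # senior store pipeline: commit each buffered element to memory
    for addr, val in zip(store_addr_buf, store_data_buf):
        memory[addr] = val
    return memory
```

Scattering values 1 and 2 to addresses 10 and 20 leaves exactly those two locations written, one store per data element.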
-
Patent number: 9886396
Abstract: In one embodiment, a processor includes a frontend unit having an instruction decoder to receive and to decode instructions of a plurality of threads, an execution unit coupled to the instruction decoder to receive and execute the decoded instructions, and an instruction retirement unit having a retirement logic to receive the instructions from the execution unit and to retire the instructions associated with one or more of the threads that have an instruction or an event pending to be retired. The instruction retirement unit includes a thread arbitration logic to select one of the threads at a time and to dispatch the selected thread to the retirement logic for retirement processing.
Type: Grant
Filed: December 23, 2014
Date of Patent: February 6, 2018
Assignee: Intel Corporation
Inventors: Roger Gramunt, Rammohan Padmanabhan, Ramon Matas, Neal S. Moyer, Benjamin C. Chaffin, Avinash Sodani, Alexey P. Suprun, Vikram S. Sundaram, Chung-Lun Chan, Gerardo A. Fernandez, Julio Gago, Michael S. Yang, Aditya Kesiraju
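The thread-arbitration step can be sketched as a simple selection loop. The abstract only says one thread is selected at a time; the round-robin policy below is my assumption, not the claimed arbitration scheme.

```python
def arbitrate_retirement(threads_pending, last_selected):
    """Pick one thread with retirement work pending, round-robin style,
    starting after the previously selected thread.
    threads_pending: list of bools, True if the thread has an instruction
    or event pending to be retired. Returns a thread id, or None."""
    n = len(threads_pending)
    for step in range(1, n + 1):
        tid = (last_selected + step) % n
        if threads_pending[tid]:
            return tid  # dispatch this thread to retirement logic
    return None  # nothing to retire on any thread
```

If threads 1 and 2 both have pending work and thread 1 was selected last, the arbiter picks thread 2 next, so no thread starves.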
-
Patent number: 9875185
Abstract: Operations associated with a memory and operations associated with one or more functional units may be received. A dependency between the operations associated with the memory and the operations associated with one or more of the functional units may be determined. A first ordering may be created for the operations associated with the memory. Furthermore, a second ordering may be created for the operations associated with one or more of the functional units based on the determined dependency and the first ordering of the operations associated with the memory.
Type: Grant
Filed: July 9, 2014
Date of Patent: January 23, 2018
Assignee: Intel Corporation
Inventors: Chunhui Zhang, George Z. Chrysos, Edward T. Grochowski, Ramacharan Sundararaman, Chung-Lun Chan, Federico Ardanaz
-
Patent number: 9715432
Abstract: Exemplary aspects are directed toward resolving fault suppression in hardware without incurring a performance hit. For example, when multiple instructions are executing simultaneously, a mask can specify which elements need not be executed. If the mask bit for an element is cleared, that element does not need to be executed. A determination is then made as to whether a fault happens in one of the elements that have been disabled. If so, a state machine re-fetches the instructions in a special mode: it determines whether the fault is on a disabled element, and if it is, specifies that the fault should be ignored. If there was no mask during the first execution and an error is present, the element is re-run with the mask to see whether the error is a “real” fault.
Type: Grant
Filed: December 23, 2014
Date of Patent: July 25, 2017
Assignee: Intel Corporation
Inventors: Ramon Matas, Roger Gramunt, Chung-Lun Chan, Benjamin C. Chaffin, Aditya Kesiraju, Jonathan C. Hall, Jesus Corbal
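The suppression rule, a fault on a masked-off element is ignored while a fault on an enabled element is real, can be sketched in software. This models only the outcome of the replay, not the state machine or re-fetch mechanics, and all names are hypothetical.

```python
def run_with_fault_suppression(elems, mask, op):
    """Apply `op` to each element of a vector-like operation. A fault
    (exception) on a lane the mask disables is suppressed; a fault on an
    enabled lane is a real fault and propagates. Disabled lanes yield None."""
    results = []
    for i, e in enumerate(elems):
        enabled = mask[i]
        try:
            r = op(e)
        except Exception:
            if not enabled:
                results.append(None)  # fault on disabled lane: ignore it
                continue
            raise                     # fault on enabled lane: real fault
        results.append(r if enabled else None)
    return results
```

With elements [1, 0, 2], a mask disabling the middle lane, and an op that divides into zero on that lane, the faulting lane is suppressed and the enabled lanes complete normally.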
-
Publication number: 20160299762
Abstract: An apparatus and method for performing an efficient scatter operation. For example, one embodiment of a processor comprises: an allocator unit to receive a scatter operation comprising a number of data elements and responsively allocate resources to execute the scatter operation; a memory execution cluster comprising at least a portion of the resources to execute the scatter operation, the resources including one or more store data buffers and one or more store address buffers; and a senior store pipeline to transfer store data elements from the store data buffers to system memory using addresses from the store address buffers prior to retirement of the scatter operation.
Type: Application
Filed: April 10, 2015
Publication date: October 13, 2016
Inventors: Ramon Matas, Alexey P. Suprun, Roger Gramunt, Chung-Lun Chan, Rammohan Padmanabhan
-
Publication number: 20160179533
Abstract: In one embodiment, a processor includes a frontend unit having an instruction decoder to receive and to decode instructions of a plurality of threads, an execution unit coupled to the instruction decoder to receive and execute the decoded instructions, and an instruction retirement unit having a retirement logic to receive the instructions from the execution unit and to retire the instructions associated with one or more of the threads that have an instruction or an event pending to be retired. The instruction retirement unit includes a thread arbitration logic to select one of the threads at a time and to dispatch the selected thread to the retirement logic for retirement processing.
Type: Application
Filed: December 23, 2014
Publication date: June 23, 2016
Inventors: Roger Gramunt, Rammohan Padmanabhan, Ramon Matas, Neal S. Moyer, Benjamin C. Chaffin, Avinash Sodani, Alexey P. Suprun, Vikram S. Sundaram, Chung-Lun Chan, Gerardo A. Fernandez, Julio Gago, Michael S. Yang, Aditya Kesiraju
-
Publication number: 20160179677
Abstract: An instruction in a first cache line may be identified and an address associated with the instruction may be determined. The address may be determined to cross a cache line boundary associated with the first cache line and a second cache line. In response to determining that the address crosses the cache line boundary, the instruction may be adjusted based on a portion of the address included in the first cache line and a second instruction may be created based on a portion of the address included in the second cache line. The second instruction may be injected into an instruction pipeline after the adjusted instruction.
Type: Application
Filed: December 18, 2014
Publication date: June 23, 2016
Inventors: Ramon Matas, Chung-Lun Chan, Alexey P. Suprun, Aditya Kesiraju
-
Publication number: 20160179632
Abstract: Exemplary aspects are directed toward resolving fault suppression in hardware without incurring a performance hit. For example, when multiple instructions are executing simultaneously, a mask can specify which elements need not be executed. If the mask bit for an element is cleared, that element does not need to be executed. A determination is then made as to whether a fault happens in one of the elements that have been disabled. If so, a state machine re-fetches the instructions in a special mode: it determines whether the fault is on a disabled element, and if it is, specifies that the fault should be ignored. If there was no mask during the first execution and an error is present, the element is re-run with the mask to see whether the error is a “real” fault.
Type: Application
Filed: December 23, 2014
Publication date: June 23, 2016
Inventors: Ramon Matas, Roger Gramunt, Chung-Lun Chan, Benjamin C. Chaffin, Aditya Kesiraju, Jonathan C. Hall, Jesus Corbal
-
Publication number: 20160011977
Abstract: Operations associated with a memory and operations associated with one or more functional units may be received. A dependency between the operations associated with the memory and the operations associated with one or more of the functional units may be determined. A first ordering may be created for the operations associated with the memory. Furthermore, a second ordering may be created for the operations associated with one or more of the functional units based on the determined dependency and the first ordering of the operations associated with the memory.
Type: Application
Filed: July 9, 2014
Publication date: January 14, 2016
Inventors: Chunhui Zhang, George Z. Chrysos, Edward T. Grochowski, Ramacharan Sundararaman, Chung-Lun Chan, Federico Ardanaz