Patents by Inventor CHUNG-LUN CHAN

CHUNG-LUN CHAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230404845
    Abstract: A foam stick includes a tube body, a core, and two covers. The tube body has an accommodating groove with two open ends. The core is disposed in the accommodating groove of the tube body. The two covers are disposed at both ends of the tube body to close the accommodating groove. Because the foam stick is composed of the tube body, the core, and the covers, the material utilization rate is high. Through the combination of the tube body, the core, and the covers, the foam stick can take on different characteristics and be used as a roller or a float.
    Type: Application
    Filed: June 17, 2022
    Publication date: December 21, 2023
    Inventors: TIEN-SZU HSU, SHU-JIN CHEN, YI-PING CHUNG, JIA-RUI XU, GENG-LIN LIU, CHUNG-LUN CHAN
  • Publication number: 20230405405
    Abstract: A balance pad includes a base. A plurality of first pillars and a peripheral wall are uprightly disposed on the base. The peripheral wall surrounds the first pillars so that the first pillars are located within the peripheral wall. The first pillars thereby reduce the stability of the balance pad to increase the difficulty of balance exercise. Because the peripheral wall surrounds the first pillars, the first pillars can abut against the peripheral wall, which prevents them from inclining so far that they injure a user.
    Type: Application
    Filed: June 17, 2022
    Publication date: December 21, 2023
    Inventors: TIEN-SZU HSU, SHU-JIN CHEN, YI-PING CHUNG, JIA-RUI XU, GENG-LIN LIU, CHUNG-LUN CHAN
  • Patent number: 10318427
    Abstract: An instruction in a first cache line may be identified and an address associated with the instruction may be determined. The address may be determined to cross a cache line boundary associated with the first cache line and a second cache line. In response to determining that the address crosses the cache line boundary, the instruction may be adjusted based on a portion of the address included in the first cache line and a second instruction may be created based on a portion of the address included in the second cache line. The second instruction may be injected into an instruction pipeline after the adjusted instruction.
    Type: Grant
    Filed: December 18, 2014
    Date of Patent: June 11, 2019
    Assignee: Intel Corporation
    Inventors: Ramon Matas, Chung-Lun Chan, Alexey P. Suprun, Aditya Kesiraju
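
The split-and-inject mechanism of patent 10318427 above can be illustrated with a small software model. The C sketch below only shows the address arithmetic for detecting an access that crosses a cache-line boundary and splitting it into two accesses; the 64-byte line size and all names are assumptions made for illustration, not details taken from the patent.

      /* Minimal software model of the cache-line split described above.
       * CACHE_LINE_SIZE and split_access are illustrative names, not from the patent. */
      #include <stdint.h>
      #include <stdio.h>

      #define CACHE_LINE_SIZE 64u  /* assumed line size; real hardware may differ */

      typedef struct {
          uint64_t addr;  /* start address of the (partial) access */
          uint32_t len;   /* bytes covered by this access */
      } access_t;

      /* Split an access that may cross a cache-line boundary.
       * Returns the number of accesses produced (1 or 2). */
      static int split_access(uint64_t addr, uint32_t len, access_t out[2]) {
          uint64_t line_end = (addr & ~(uint64_t)(CACHE_LINE_SIZE - 1)) + CACHE_LINE_SIZE;
          if (addr + len <= line_end) {
              out[0] = (access_t){ addr, len };           /* fits in the first line */
              return 1;
          }
          uint32_t first = (uint32_t)(line_end - addr);
          out[0] = (access_t){ addr, first };             /* portion in the first cache line */
          out[1] = (access_t){ line_end, len - first };   /* injected second access for the next line */
          return 2;
      }

      int main(void) {
          access_t parts[2];
          int n = split_access(0x1000 + 60, 8, parts);    /* 8-byte access starting 4 bytes before a boundary */
          for (int i = 0; i < n; i++)
              printf("access %d: addr=0x%llx len=%u\n", i,
                     (unsigned long long)parts[i].addr, (unsigned)parts[i].len);
          return 0;
      }
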
  • Patent number: 10261904
    Abstract: Operations associated with a memory and operations associated with one or more functional units may be received. A dependency between the operations associated with the memory and the operations associated with one or more of the functional units may be determined. A first ordering may be created for the operations associated with the memory. Furthermore, a second ordering may be created for the operations associated with one or more of the functional units based on the determined dependency and the first ordering of the operations associated with the memory.
    Type: Grant
    Filed: December 7, 2017
    Date of Patent: April 16, 2019
    Assignee: Intel Corporation
    Inventors: Chunhui Zhang, George Z. Chrysos, Edward T. Grochowski, Ramacharan Sundararaman, Chung-Lun Chan, Federico Ardanaz
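
A rough software analogue of the two-ordering scheme in patent 10261904 above: memory operations keep a first (program-order) ordering, and functional-unit operations are then ordered according to the memory operation each one depends on. The dependency encoding and the sort-based approach below are assumptions for this sketch.

      /* Illustrative sketch of the two-ordering idea: memory ops get a first ordering,
       * and functional-unit ops are ordered so each follows the memory op it depends on.
       * All names and the dependency encoding are invented for this example. */
      #include <stdio.h>
      #include <stdlib.h>

      typedef struct {
          int id;       /* functional-unit operation id */
          int mem_dep;  /* index (in the first ordering) of the memory op it depends on, -1 if none */
      } fu_op_t;

      /* Order FU ops by the position of their memory dependency in the first ordering. */
      static int by_mem_dep(const void *a, const void *b) {
          const fu_op_t *x = a, *y = b;
          if (x->mem_dep != y->mem_dep) return x->mem_dep - y->mem_dep;
          return x->id - y->id;               /* tie-break keeps a deterministic order */
      }

      int main(void) {
          /* First ordering: the memory ops simply keep program order 0, 1, 2, ... */
          const char *mem_ops[] = { "load A", "store B", "load C" };

          /* Functional-unit ops, each tagged with the memory op whose result it needs. */
          fu_op_t fu_ops[] = {
              { 0, 2 },   /* consumes "load C"     */
              { 1, 0 },   /* consumes "load A"     */
              { 2, -1 },  /* no memory dependency  */
          };
          int n = (int)(sizeof fu_ops / sizeof fu_ops[0]);

          /* Second ordering: derived from the dependencies and the first ordering. */
          qsort(fu_ops, (size_t)n, sizeof fu_ops[0], by_mem_dep);

          for (int i = 0; i < n; i++) {
              if (fu_ops[i].mem_dep < 0)
                  printf("FU slot %d: op %d (no memory dependency)\n", i, fu_ops[i].id);
              else
                  printf("FU slot %d: op %d (after \"%s\")\n", i, fu_ops[i].id, mem_ops[fu_ops[i].mem_dep]);
          }
          return 0;
      }
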
  • Patent number: 10108554
    Abstract: Methods, systems, and apparatuses relating to sharing translation lookaside buffer entries are described. In one embodiment, a processor includes one or more cores to execute a plurality of threads, a translation lookaside buffer comprising a plurality of entries, each entry comprising a virtual address to physical address translation and a plurality of bit positions, and each set bit of the plurality of bit positions in each entry indicating that the virtual address to physical address translation is valid for a respective thread of the plurality of threads, and a memory management circuit to clear all set bits for a thread by asserting a reset command to a respective reset port of the translation lookaside buffer for the thread, wherein the translation lookaside buffer comprises a separate reset port for each of the plurality of threads.
    Type: Grant
    Filed: December 5, 2016
    Date of Patent: October 23, 2018
    Assignee: Intel Corporation
    Inventors: Chung-Lun Chan, Ramon Matas
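
The shared-TLB-entry structure in patent 10108554 above lends itself to a small data-structure sketch: each entry carries one translation plus a per-thread valid bitmask, and a per-thread reset clears that thread's bit in every entry. Table sizes, field names, and the linear lookup below are illustrative assumptions only, not the patented hardware.

      /* Minimal C sketch of the shared TLB entry: one VA->PA translation per entry
       * plus one valid bit per thread; a per-thread "reset" clears that thread's
       * bit in every entry, modeling the per-thread reset port. */
      #include <stdint.h>
      #include <stdio.h>

      #define TLB_ENTRIES 8
      #define NUM_THREADS 4   /* assumed thread count for this sketch */

      typedef struct {
          uint64_t vpn;          /* virtual page number */
          uint64_t pfn;          /* physical frame number */
          uint8_t  thread_valid; /* bit t set => translation valid for thread t */
      } tlb_entry_t;

      static tlb_entry_t tlb[TLB_ENTRIES];

      /* Install a translation and mark it valid for one thread; another thread that
       * resolves the same translation can set its own bit and share the entry. */
      static void tlb_fill(int idx, uint64_t vpn, uint64_t pfn, int thread) {
          tlb[idx].vpn = vpn;
          tlb[idx].pfn = pfn;
          tlb[idx].thread_valid |= (uint8_t)(1u << thread);
      }

      /* Model of the per-thread reset port: clear that thread's bit in every entry. */
      static void tlb_reset_thread(int thread) {
          for (int i = 0; i < TLB_ENTRIES; i++)
              tlb[i].thread_valid &= (uint8_t)~(1u << thread);
      }

      /* A lookup hits only if the entry matches and is valid for this thread. */
      static int tlb_lookup(uint64_t vpn, int thread, uint64_t *pfn) {
          for (int i = 0; i < TLB_ENTRIES; i++) {
              if (tlb[i].vpn == vpn && (tlb[i].thread_valid & (1u << thread))) {
                  *pfn = tlb[i].pfn;
                  return 1;
              }
          }
          return 0;
      }

      int main(void) {
          uint64_t pfn;
          tlb_fill(0, 0x1234, 0xABCD, 0);          /* thread 0 installs a translation    */
          tlb[0].thread_valid |= 1u << 1;          /* thread 1 shares the same entry     */
          tlb_reset_thread(0);                     /* e.g. thread 0 changes its mappings */
          printf("thread 0 hit: %d\n", tlb_lookup(0x1234, 0, &pfn)); /* 0: invalidated */
          printf("thread 1 hit: %d\n", tlb_lookup(0x1234, 1, &pfn)); /* 1: still valid */
          return 0;
      }
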
  • Publication number: 20180173628
    Abstract: Operations associated with a memory and operations associated with one or more functional units may be received. A dependency between the operations associated with the memory and the operations associated with one or more of the functional units may be determined. A first ordering may be created for the operations associated with the memory. Furthermore, a second ordering may be created for the operations associated with one or more of the functional units based on the determined dependency and the first ordering of the operations associated with the memory.
    Type: Application
    Filed: December 7, 2017
    Publication date: June 21, 2018
    Inventors: CHUNHUI ZHANG, GEORGE Z. CHRYSOS, EDWARD T. GROCHOWSKI, RAMACHARAN SUNDARARAMAN, CHUNG-LUN CHAN, FEDERICO ARDANAZ
  • Publication number: 20180157598
    Abstract: Methods, systems, and apparatuses relating to sharing translation lookaside buffer entries are described. In one embodiment, a processor includes one or more cores to execute a plurality of threads, a translation lookaside buffer comprising a plurality of entries, each entry comprising a virtual address to physical address translation and a plurality of bit positions, and each set bit of the plurality of bit positions in each entry indicating that the virtual address to physical address translation is valid for a respective thread of the plurality of threads, and a memory management circuit to clear all set bits for a thread by asserting a reset command to a respective reset port of the translation lookaside buffer for the thread, wherein the translation lookaside buffer comprises a separate reset port for each of the plurality of threads.
    Type: Application
    Filed: December 5, 2016
    Publication date: June 7, 2018
    Inventors: CHUNG-LUN CHAN, RAMON MATAS
  • Patent number: 9891914
    Abstract: An apparatus and method for performing an efficient scatter operation. For example, one embodiment of a processor comprises: an allocator unit to receive a scatter operation comprising a number of data elements and responsively allocate resources to execute the scatter operation; a memory execution cluster comprising at least a portion of the resources to execute the scatter operation, the resources including one or more store data buffers and one or more store address buffers; and a senior store pipeline to transfer store data elements from the store data buffers to system memory using addresses from the store address buffers prior to retirement of the scatter operation.
    Type: Grant
    Filed: April 10, 2015
    Date of Patent: February 13, 2018
    Assignee: Intel Corporation
    Inventors: Ramon Matas, Alexey P. Suprun, Roger Gramunt, Chung-Lun Chan, Rammohan Padmanabhan
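
A software analogue of the scatter operation in patent 9891914 above: data elements held in a "store data buffer" are written to memory at locations taken from a "store address buffer". Buffer sizes, names, and the drain loop standing in for the senior store pipeline are assumptions made for illustration.

      /* Software analogue of a scatter: each data element goes to the memory
       * location given by the corresponding entry of the address buffer. */
      #include <stdio.h>

      #define NUM_ELEMS 4

      int main(void) {
          double memory[16] = { 0 };                               /* stand-in for system memory */
          double store_data[NUM_ELEMS]  = { 1.0, 2.0, 3.0, 4.0 };  /* store data buffer          */
          int    store_index[NUM_ELEMS] = { 9, 2, 14, 5 };         /* store address buffer       */

          /* The "senior store pipeline" drains one element per iteration,
           * writing each data element to its corresponding address. */
          for (int i = 0; i < NUM_ELEMS; i++)
              memory[store_index[i]] = store_data[i];

          for (int i = 0; i < 16; i++)
              printf("%s%.1f", i ? " " : "", memory[i]);
          printf("\n");
          return 0;
      }
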
  • Patent number: 9886396
    Abstract: In one embodiment, a processor includes a frontend unit having an instruction decoder to receive and to decode instructions of a plurality of threads, an execution unit coupled to the instruction decoder to receive and execute the decoded instructions, and an instruction retirement unit having a retirement logic to receive the instructions from the execution unit and to retire the instructions associated with one or more of the threads that have an instruction or an event pending to be retired. The instruction retirement unit includes a thread arbitration logic to select one of the threads at a time and to dispatch the selected thread to the retirement logic for retirement processing.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: February 6, 2018
    Assignee: Intel Corporation
    Inventors: Roger Gramunt, Rammohan Padmanabhan, Ramon Matas, Neal S. Moyer, Benjamin C. Chaffin, Avinash Sodani, Alexey P. Suprun, Vikram S. Sundaram, Chung-Lun Chan, Gerardo A. Fernandez, Julio Gago, Michael S. Yang, Aditya Kesiraju
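
The thread arbitration in patent 9886396 above can be modeled as a per-cycle selection among threads that have retirement work pending. The round-robin policy used below is an assumption for this sketch; the abstract only states that one thread is selected at a time and dispatched to the retirement logic.

      /* Illustrative round-robin arbitration: each cycle, the next thread with
       * pending work is selected and one instruction from it is retired. */
      #include <stdio.h>

      #define NUM_THREADS 4

      int main(void) {
          int pending[NUM_THREADS] = { 2, 0, 3, 1 };  /* instructions waiting to retire per thread */
          int last = NUM_THREADS - 1;                 /* last thread granted */
          int total = 0;
          for (int t = 0; t < NUM_THREADS; t++)
              total += pending[t];

          for (int cycle = 0; total > 0; cycle++) {
              /* Pick the next thread after 'last' that has something to retire. */
              for (int k = 1; k <= NUM_THREADS; k++) {
                  int t = (last + k) % NUM_THREADS;
                  if (pending[t] > 0) {
                      pending[t]--;                   /* retire one instruction from thread t */
                      total--;
                      last = t;
                      printf("cycle %d: retired from thread %d\n", cycle, t);
                      break;
                  }
              }
          }
          return 0;
      }
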
  • Patent number: 9875185
    Abstract: Operations associated with a memory and operations associated with one or more functional units may be received. A dependency between the operations associated with the memory and the operations associated with one or more of the functional units may be determined. A first ordering may be created for the operations associated with the memory. Furthermore, a second ordering may be created for the operations associated with one or more of the functional units based on the determined dependency and the first ordering of the operations associated with the memory.
    Type: Grant
    Filed: July 9, 2014
    Date of Patent: January 23, 2018
    Assignee: Intel Corporation
    Inventors: Chunhui Zhang, George Z. Chrysos, Edward T. Grochowski, Ramacharan Sundararaman, Chung-Lun Chan, Federico Ardanaz
  • Patent number: 9715432
    Abstract: Exemplary aspects are directed toward resolving fault suppression in hardware without incurring a performance hit. For example, when multiple instructions are executing simultaneously, a mask can specify which elements need not be executed. If the mask bit for an element is disabled, that element does not need to be executed. A determination is then made as to whether a fault happens in one of the elements that have been disabled. If there is a fault in one of the disabled elements, a state machine re-fetches the instructions in a special mode. More specifically, the state machine determines whether the fault is on a disabled element and, if it is, specifies that the fault should be ignored. If there was no mask during the first execution and an error is present, the element is re-run with the mask to see if the error is a “real” fault.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: July 25, 2017
    Assignee: INTEL CORPORATION
    Inventors: Ramon Matas, Roger Gramunt, Chung-Lun Chan, Benjamin C. Chaffin, Aditya Kesiraju, Jonathan C. Hall, Jesus Corbal
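
The masked fault-suppression behavior in patent 9715432 above can be sketched as follows: a fault raised for an element is ignored when that element's mask bit is clear. The "fault" condition, the names, and the handling below are invented for illustration and are not the patented hardware mechanism.

      /* Minimal model of masked fault suppression: a fault on a disabled
       * (masked-off) element is ignored; a fault on an enabled element is real. */
      #include <stdio.h>
      #include <stdbool.h>

      #define VLEN 4

      /* Pretend "fault": division by zero stands in for a memory fault. */
      static bool element_faults(int divisor) { return divisor == 0; }

      int main(void) {
          int  src[VLEN]  = { 8, 4, 6, 9 };
          int  div[VLEN]  = { 2, 0, 3, 0 };      /* elements 1 and 3 would fault */
          bool mask[VLEN] = { 1, 0, 1, 1 };      /* element 1 is disabled        */
          int  dst[VLEN]  = { 0 };

          for (int i = 0; i < VLEN; i++) {
              if (element_faults(div[i])) {
                  if (!mask[i]) {
                      printf("element %d: fault suppressed (element disabled)\n", i);
                      continue;                  /* disabled element: ignore the fault */
                  }
                  printf("element %d: real fault, would be reported\n", i);
                  continue;
              }
              if (mask[i])
                  dst[i] = src[i] / div[i];      /* enabled, non-faulting element */
          }

          for (int i = 0; i < VLEN; i++)
              printf("dst[%d]=%d\n", i, dst[i]);
          return 0;
      }
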
  • Publication number: 20160299762
    Abstract: An apparatus and method for performing an efficient scatter operation. For example, one embodiment of a processor comprises: an allocator unit to receive a scatter operation comprising a number of data elements and responsively allocate resources to execute the scatter operation; a memory execution cluster comprising at least a portion of the resources to execute the scatter operation, the resources including one or more store data buffers and one or more store address buffers; and a senior store pipeline to transfer store data elements from the store data buffers to system memory using addresses from the store address buffers prior to retirement of the scatter operation.
    Type: Application
    Filed: April 10, 2015
    Publication date: October 13, 2016
    Inventors: RAMON MATAS, ALEXEY P. SUPRUN, ROGER GRAMUNT, CHUNG-LUN CHAN, RAMMOHAN PADMANABHAN
  • Publication number: 20160179677
    Abstract: An instruction in a first cache line may be identified and an address associated with the instruction may be determined. The address may be determined to cross a cache line boundary associated with the first cache line and a second cache line. In response to determining that the address crosses the cache line boundary, the instruction may be adjusted based on a portion of the address included in the first cache line and a second instruction may be created based on a portion of the address included in the second cache line. The second instruction may be injected into an instruction pipeline after the adjusted instruction.
    Type: Application
    Filed: December 18, 2014
    Publication date: June 23, 2016
    Inventors: RAMON MATAS, CHUNG-LUN CHAN, ALEXEY P. SUPRUN, ADITYA KESIRAJU
  • Publication number: 20160179533
    Abstract: In one embodiment, a processor includes a frontend unit having an instruction decoder to receive and to decode instructions of a plurality of threads, an execution unit coupled to the instruction decoder to receive and execute the decoded instructions, and an instruction retirement unit having a retirement logic to receive the instructions from the execution unit and to retire the instructions associated with one or more of the threads that have an instruction or an event pending to be retired. The instruction retirement unit includes a thread arbitration logic to select one of the threads at a time and to dispatch the selected thread to the retirement logic for retirement processing.
    Type: Application
    Filed: December 23, 2014
    Publication date: June 23, 2016
    Inventors: Roger Gramunt, Rammohan Padmanabhan, Ramon Matas, Neal S. Moyer, Benjamin C. Chaffin, Avinash Sodani, Alexey P. Suprun, Vikram S. Sundaram, Chung-Lun Chan, Gerardo A. Fernandez, Julio Gago, Michael S. Yang, Aditya Kesiraju
  • Publication number: 20160179632
    Abstract: Exemplary aspects are directed toward resolving fault suppression in hardware without incurring a performance hit. For example, when multiple instructions are executing simultaneously, a mask can specify which elements need not be executed. If the mask bit for an element is disabled, that element does not need to be executed. A determination is then made as to whether a fault happens in one of the elements that have been disabled. If there is a fault in one of the disabled elements, a state machine re-fetches the instructions in a special mode. More specifically, the state machine determines whether the fault is on a disabled element and, if it is, specifies that the fault should be ignored. If there was no mask during the first execution and an error is present, the element is re-run with the mask to see if the error is a “real” fault.
    Type: Application
    Filed: December 23, 2014
    Publication date: June 23, 2016
    Inventors: Ramon MATAS, Roger GRAMUNT, Chung-Lun CHAN, Benjamin C. CHAFFIN, Aditya KESIRAJU, Jonathan C. HALL, Jesus CORBAL
  • Publication number: 20160011977
    Abstract: Operations associated with a memory and operations associated with one or more functional units may be received. A dependency between the operations associated with the memory and the operations associated with one or more of the functional units may be determined. A first ordering may be created for the operations associated with the memory. Furthermore, a second ordering may be created for the operations associated with one or more of the functional units based on the determined dependency and the first ordering of the operations associated with the memory.
    Type: Application
    Filed: July 9, 2014
    Publication date: January 14, 2016
    Inventors: CHUNHUI ZHANG, GEORGE Z. CHRYSOS, EDWARD T. GROCHOWSKI, RAMACHARAN SUNDARARAMAN, CHUNG-LUN CHAN, FEDERICO ARDANAZ