Patents by Inventor Pi-Cheng HSIAO

Pi-Cheng HSIAO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10031573
    Abstract: Energy efficiency is managed in a multi-cluster system. The system detects an event in which a current operating frequency of an active cluster enters or crosses any of one or more predetermined frequency spots of the active cluster, wherein the active cluster includes one or more first processor cores. When the event is detected, the system performs the following steps: (1) identifying a target cluster including one or more second processor cores, wherein each first processor core in the active cluster and each second processor core in the target cluster have different energy efficiency characteristics; (2) activating at least one second processor core in the target cluster; (3) determining whether to migrate one or more interrupt requests from the active cluster to the target cluster; and (4) determining whether to deactivate at least one first processor core of the active cluster based on a performance and power requirement.
    Type: Grant
    Filed: November 4, 2015
    Date of Patent: July 24, 2018
    Assignee: MediaTek, Inc.
    Inventors: Jia-Ming Chen, Hung-Lin Chou, Pi-Cheng Hsiao, Ya-Ting Chang, Yun-Ching Li, Yu-Ming Lin
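The decision flow in the abstract above (detect a frequency-spot event, then run steps (1)-(4)) can be sketched as a small Python model. Everything here, from the dict-based cluster descriptors to the performance-based migration policy, is an illustrative assumption, not the patented implementation.

```python
# Hypothetical sketch of the frequency-spot event and cluster-switch steps.
# Cluster descriptors, the "perf" field, and the migration policy are assumptions.

def crossed_frequency_spot(prev_freq_mhz, curr_freq_mhz, spots_mhz):
    """True if the operating frequency entered or crossed any predetermined spot."""
    lo, hi = sorted((prev_freq_mhz, curr_freq_mhz))
    return any(lo <= spot <= hi for spot in spots_mhz)

def plan_cluster_switch(prev_freq, curr_freq, active, target, spots):
    """Steps (1)-(4): pick a target cluster, activate a core there, and decide
    on interrupt migration and deactivation of the active cluster."""
    if not crossed_frequency_spot(prev_freq, curr_freq, spots):
        return None  # no event detected; nothing to do
    switch_over = target["perf"] >= active["perf"]  # assumed policy knob
    return {
        "target_cluster": target["name"],     # step (1): identified target
        "activate_cores": 1,                  # step (2): wake one second core
        "migrate_irqs": switch_over,          # step (3): whether to move IRQs
        "deactivate_active": switch_over,     # step (4): whether to park first cores
    }
```

A usage example: moving from 900 MHz through a 1000 MHz spot toward a higher-performance cluster would trigger a switch plan, while a move that stays below the spot returns `None`.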
  • Publication number: 20180143903
    Abstract: A multi-cluster, multi-processor computing system performs a cache flushing method. The method begins with a cache maintenance hardware engine receiving a request from a processor to flush cache contents to a memory. In response, the cache maintenance hardware engine generates the commands to flush the cache contents, thereby removing the workload of command generation from the processors. The commands are issued to the clusters, with each command specifying a physical address that identifies a cache line to be flushed.
    Type: Application
    Filed: June 12, 2017
    Publication date: May 24, 2018
    Inventors: Ming-Ju Wu, Chien-Hung Lin, Chia-Hao Hsu, Pi-Cheng Hsiao, Shao-Yu Wang
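The command generation the abstract describes can be sketched as expanding a flush request over an address range into one command per cache line, each carrying the line's physical address, fanned out to every cluster. The 64-byte line size and the `(cluster, address)` command tuple are illustrative assumptions.

```python
# Illustrative sketch of per-line flush command generation by the
# cache maintenance hardware engine. Line size and command format are assumed.

CACHE_LINE_BYTES = 64  # assumed cache line size

def flush_commands(base_paddr, length, clusters):
    """Return (cluster_id, line_paddr) flush commands covering the byte range."""
    first = base_paddr & ~(CACHE_LINE_BYTES - 1)               # align down
    last = (base_paddr + length - 1) & ~(CACHE_LINE_BYTES - 1)  # last line touched
    cmds = []
    for line in range(first, last + 1, CACHE_LINE_BYTES):
        for cid in clusters:   # each command is issued to the clusters
            cmds.append((cid, line))
    return cmds
```

For example, a 128-byte request starting mid-line at 0x1008 touches three cache lines (0x1000, 0x1040, 0x1080), yielding six commands across two clusters.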
  • Patent number: 9977699
    Abstract: A multi-cluster system having processor cores of different energy efficiency characteristics is configured to operate with high efficiency such that performance and power requirements can be satisfied. The system includes multiple processor cores in a hierarchy of groups. The hierarchy of groups includes: multiple level-1 groups, each level-1 group including one or more processor cores having identical energy efficiency characteristics, and each level-1 group configured to be assigned tasks by a level-1 scheduler; one or more level-2 groups, each level-2 group including respective level-1 groups, the processor cores in different level-1 groups of the same level-2 group having different energy efficiency characteristics, and each level-2 group configured to be assigned tasks by a respective level-2 scheduler; and a level-3 group including the one or more level-2 groups and configured to be assigned tasks by a level-3 scheduler.
    Type: Grant
    Filed: November 10, 2015
    Date of Patent: May 22, 2018
    Assignee: MediaTek, Inc.
    Inventors: Jia-Ming Chen, Hung-Lin Chou, Ya-Ting Chang, Shih-Yen Chiu, Chia-Hao Hsu, Yu-Ming Lin, Wan-Ching Huang, Jen-Chieh Yang, Pi-Cheng Hsiao
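The three-level hierarchy in the abstract above can be sketched as nested classes, each level with its own scheduler. The class names, the round-robin level-1 scheduler, and the "prefer efficient cores" level-2 policy are all illustrative assumptions.

```python
# Hypothetical sketch of the level-1/2/3 group hierarchy and per-level schedulers.

class Level1Group:
    def __init__(self, efficiency_class, cores):
        self.efficiency_class = efficiency_class  # identical within the group
        self.cores = cores
        self._next = 0

    def assign(self, task):
        # level-1 scheduler: round-robin over the group's cores (assumed policy)
        core = self.cores[self._next % len(self.cores)]
        self._next += 1
        return (core, task)

class Level2Group:
    def __init__(self, level1_groups):
        # member level-1 groups must differ in efficiency characteristics
        classes = [g.efficiency_class for g in level1_groups]
        assert len(set(classes)) == len(classes)
        self.groups = level1_groups

    def assign(self, task, prefer="little"):
        # level-2 scheduler: prefer a given efficiency class (assumed policy)
        for g in self.groups:
            if g.efficiency_class == prefer:
                return g.assign(task)
        return self.groups[0].assign(task)

class Level3Group:
    def __init__(self, level2_groups):
        self.groups = level2_groups

    def assign(self, task):
        # level-3 scheduler: trivially picks the first level-2 group here
        return self.groups[0].assign(task)
```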
  • Publication number: 20170300427
    Abstract: A multi-processor system with cache sharing has a plurality of processor sub-systems and a cache coherence interconnect circuit. The processor sub-systems have a first processor sub-system and a second processor sub-system. The first processor sub-system includes at least one first processor and a first cache coupled to the at least one first processor. The second processor sub-system includes at least one second processor and a second cache coupled to the at least one second processor. The cache coherence interconnect circuit is coupled to the processor sub-systems and is used to obtain cache line data from an evicted cache line in the first cache and transfer the obtained cache line data to the second cache for storage.
    Type: Application
    Filed: April 13, 2017
    Publication date: October 19, 2017
    Inventors: Chien-Hung Lin, Ming-Ju Wu, Wei-Hao Chiao, Kun-Geng Lee, Shun-Chieh Chang, Ming-Ku Chang, Chia-Hao Hsu, Pi-Cheng Hsiao
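The eviction path in the abstract above can be sketched in a few lines: when the first cache evicts a line, the interconnect captures the line's data and installs it in a peer cache instead of dropping it. The dict-backed caches, FIFO eviction, and "next cache in the list" peer choice are illustrative assumptions.

```python
# Hypothetical sketch of evicted-line transfer through a coherence interconnect.

class Cache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}  # paddr -> data; insertion order gives FIFO eviction

    def fill(self, paddr, data):
        """Install a line; return the (paddr, data) it evicted, if any."""
        evicted = None
        if len(self.lines) >= self.capacity:
            old = next(iter(self.lines))
            evicted = (old, self.lines.pop(old))
        self.lines[paddr] = data
        return evicted

class CoherenceInterconnect:
    def __init__(self, caches):
        self.caches = caches

    def fill(self, cache_idx, paddr, data):
        evicted = self.caches[cache_idx].fill(paddr, data)
        if evicted is not None:
            # transfer the evicted line's data to a peer cache for storage
            peer = self.caches[(cache_idx + 1) % len(self.caches)]
            peer.fill(*evicted)
```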
  • Publication number: 20170160782
    Abstract: A multicore processor system utilizes a power manager to reduce power consumption. The system includes multiple processing units and multiple power sources. Each power source is connected to two or more processing units. A condition for activating a processing unit is detected. In response to the detected condition, the power manager identifies a power source that is connected to inactive processing units only. The power manager then activates a target processing unit among the inactive processing units connected to the identified power source.
    Type: Application
    Filed: August 12, 2016
    Publication date: June 8, 2017
    Inventors: Ya-Ting Chang, Jia-Ming Chen, Nicholas Ching Hui Tang, Pi-Cheng Hsiao
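The selection rule in the abstract above reduces to a small search: among the power sources whose connected processing units are all inactive, pick one and wake a unit on it, leaving already-loaded sources undisturbed. The data layout and the "first match" tie-break are illustrative assumptions.

```python
# Hypothetical sketch of the power manager's activation rule.

def choose_unit_to_activate(sources, active_units):
    """sources: {source_id: [unit_id, ...]}; active_units: set of active unit ids.
    Returns (source_id, unit_id) to activate, or None if no source qualifies."""
    for source_id, units in sources.items():
        if all(u not in active_units for u in units):  # inactive units only
            return source_id, units[0]                 # assumed: first unit wins
    return None
```

For example, if one voltage rail already powers an active unit and another rail's units are all idle, the idle rail is chosen.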
  • Publication number: 20170023997
    Abstract: A switch interconnect is dynamically controlled at runtime to connect power sources to processing units in a multiprocessor system. Each power source is shareable by the processing units and each processing unit has a required voltage for processing a workload. When a system condition is detected at runtime, the switch interconnect is controlled to change a connection between at least one processing unit and a shared power source to maximize power efficiency. The shared power source is one of the power sources that supports multiple processing units having different required voltages.
    Type: Application
    Filed: March 16, 2016
    Publication date: January 26, 2017
    Inventors: Jia-Ming Chen, Hung-Lin Chou, Pi-Cheng Hsiao, Yen-Lin Lee, Ya-Ting Chang, Jih-Ming Hsu
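The efficiency argument in the abstract above is that a shared source must supply the maximum required voltage of the units on it, so regrouping units with similar requirements onto the same rail cuts wasted headroom. A sketch of that metric and a simple regrouping heuristic, both illustrative assumptions:

```python
# Hypothetical sketch of voltage headroom waste and a sort-based regrouping.

def rail_waste(assignment, required_v):
    """Total over-voltage across units, given {rail: [unit, ...]}."""
    waste = 0.0
    for units in assignment.values():
        rail_v = max(required_v[u] for u in units)  # rail must cover the max
        waste += sum(rail_v - required_v[u] for u in units)
    return waste

def regroup(units, required_v, n_rails):
    """Sort units by required voltage and split into contiguous rail groups."""
    ordered = sorted(units, key=lambda u: required_v[u])
    size = -(-len(ordered) // n_rails)  # ceiling division
    return {r: ordered[r * size:(r + 1) * size]
            for r in range(n_rails) if ordered[r * size:(r + 1) * size]}
```

With two 0.6 V units and two 0.9 V units split across two rails, a mixed pairing wastes 0.6 V of headroom while the sorted pairing wastes none.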
  • Publication number: 20160314024
    Abstract: A computing system supports a clearance mode for its processor cores. The computing system can transition a target processor core from an active mode into a clearance mode according to a system policy. The system policy determines the number of processor cores to be in the active mode. The transitioning into the clearance mode includes the operations of migrating work from the target processor core to one or more other processor cores in the active mode in the computing system; and removing the target processor core from a scheduling configuration of the computing system to prevent task assignment to the target processor core. When the target processor core is in the clearance mode, the target processor core is maintained in an online idle state in which the target processor core performs no work.
    Type: Application
    Filed: April 14, 2016
    Publication date: October 27, 2016
    Inventors: Ya-Ting Chang, Ming-Ju Wu, Pi-Cheng Chen, Jia-Ming Chen, Chung-Ho Chang, Pi-Cheng Hsiao, Hung-Lin Chou, Shih-Yen Chiu
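The transition in the abstract above has two operations: migrate the target core's pending work to cores that stay active, then remove the core from the scheduling configuration so nothing new lands on it; the core remains online but idle. The run-queue dict and round-robin redistribution are illustrative assumptions.

```python
# Hypothetical sketch of transitioning a core into clearance mode.

def enter_clearance_mode(target, run_queues, schedulable):
    """run_queues: {core: [task, ...]}; schedulable: set of cores that may
    receive new task assignments."""
    others = sorted(c for c in schedulable if c != target)
    if not others:
        raise RuntimeError("system policy requires other cores to stay active")
    for i, task in enumerate(run_queues.get(target, [])):
        run_queues[others[i % len(others)]].append(task)  # migrate work away
    run_queues[target] = []          # the core now performs no work (online idle)
    schedulable.discard(target)      # removed from the scheduling configuration
```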
  • Publication number: 20160139964
    Abstract: A multi-cluster system having processor cores of different energy efficiency characteristics is configured to operate with high efficiency such that performance and power requirements can be satisfied. The system includes multiple processor cores in a hierarchy of groups. The hierarchy of groups includes: multiple level-1 groups, each level-1 group including one or more processor cores having identical energy efficiency characteristics, and each level-1 group configured to be assigned tasks by a level-1 scheduler; one or more level-2 groups, each level-2 group including respective level-1 groups, the processor cores in different level-1 groups of the same level-2 group having different energy efficiency characteristics, and each level-2 group configured to be assigned tasks by a respective level-2 scheduler; and a level-3 group including the one or more level-2 groups and configured to be assigned tasks by a level-3 scheduler.
    Type: Application
    Filed: November 10, 2015
    Publication date: May 19, 2016
    Inventors: Jia-Ming CHEN, Hung-Lin CHOU, Ya-Ting CHANG, Shih-Yen CHIU, Chia-Hao HSU, Yu-Ming LIN, Wan-Ching HUANG, Jen-Chieh YANG, Pi-Cheng HSIAO
  • Publication number: 20160139655
    Abstract: Energy efficiency is managed in a multi-cluster system. The system detects an event in which a current operating frequency of an active cluster enters or crosses any of one or more predetermined frequency spots of the active cluster, wherein the active cluster includes one or more first processor cores. When the event is detected, the system performs the following steps: (1) identifying a target cluster including one or more second processor cores, wherein each first processor core in the active cluster and each second processor core in the target cluster have different energy efficiency characteristics; (2) activating at least one second processor core in the target cluster; (3) determining whether to migrate one or more interrupt requests from the active cluster to the target cluster; and (4) determining whether to deactivate at least one first processor core of the active cluster based on a performance and power requirement.
    Type: Application
    Filed: November 4, 2015
    Publication date: May 19, 2016
    Inventors: Jia-Ming CHEN, Hung-Lin CHOU, Pi-Cheng HSIAO, Ya-Ting CHANG, Yun-Ching LI, Yu-Ming LIN
  • Patent number: 8589718
    Abstract: A performance scaling device, a processor having the same, and a performance scaling method thereof are provided. The performance scaling device includes an adaptive voltage scaling unit, a latency prediction unit, and a variable-latency datapath. The adaptive voltage scaling unit generates a plurality of operation voltages and transmits the operation voltages to the variable-latency datapath. The variable-latency datapath operates with different latencies according to the operation voltages and generates an operation latency. The latency prediction unit receives the operation latency and a system latency tolerance and generates a voltage scaling signal for the adaptive voltage scaling unit according to the operation latency and the system latency tolerance. The adaptive voltage scaling unit outputs and scales the operation voltages thereof according to the voltage scaling signal.
    Type: Grant
    Filed: September 14, 2010
    Date of Patent: November 19, 2013
    Assignee: Industrial Technology Research Institute
    Inventors: Chi-Hung Lin, Pi-Cheng Hsiao, Tay-Jyi Lin, Gin-Kou Ma
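The closed loop in the abstract above can be modeled in a few lines: the latency prediction unit compares the datapath's observed operation latency against the system latency tolerance and emits a voltage scaling signal, and the adaptive voltage scaling unit raises or lowers the operation voltage accordingly. The step size, guard band, and voltage limits are illustrative assumptions.

```python
# Hypothetical sketch of the latency-prediction / adaptive-voltage-scaling loop.

def voltage_scaling_signal(operation_latency, latency_tolerance, margin=0.05):
    """+1 = raise voltage (too slow), -1 = lower voltage (slack), 0 = hold.
    `margin` is an assumed guard band to avoid oscillation near the tolerance."""
    if operation_latency > latency_tolerance:
        return +1
    if operation_latency < latency_tolerance * (1.0 - margin):
        return -1
    return 0

def scale_voltage(v, signal, step=0.025, v_min=0.7, v_max=1.1):
    """Apply one scaling step, clamped to the assumed operating range."""
    return min(v_max, max(v_min, v + signal * step))
```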
  • Patent number: 8499188
    Abstract: An embodiment of a processing device includes a function unit and a control unit. The function unit receives input data and performs a specific operation on the input data to generate result data. The control unit receives the result data and generates an output signal. The control unit latches the result data according to a first clock signal to generate first data and latches the result data according to a second clock signal to generate second data. The control unit compares the first data with the second data to generate a control signal and selects the first data or the second data to serve as data of the output signal according to the control signal. The second clock signal is delayed from the first clock signal by a predefined time period.
    Type: Grant
    Filed: September 24, 2010
    Date of Patent: July 30, 2013
    Assignee: Industrial Technology Research Institute
    Inventors: Chou-Kun Lin, Tay-Jyi Lin, Pi-Cheng Hsiao, Yuan-Hua Chu
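A software analogue of the control unit described above: the result is sampled at the nominal clock and again after a fixed delay; if the two samples disagree, the delayed sample is selected, since the early one caught the value mid-transition. This models the hardware behavior only and makes assumed choices for the sample times and probe interface.

```python
# Software analogue of the dual-latch compare-and-select control unit.

def control_unit(sample_at):
    """sample_at(t) returns the function unit's result data at time t
    (an assumed probe standing in for the datapath output)."""
    first = sample_at(0.0)     # latched on the first clock signal
    second = sample_at(0.5)    # latched on the delayed second clock signal
    error = first != second    # control signal from comparing the two
    selected = second if error else first
    return selected, error
```

A slow computation whose output settles after the first sample point is caught by the comparison; a fast one passes through with no error flagged.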
  • Publication number: 20110314306
    Abstract: A performance scaling device, a processor having the same, and a performance scaling method thereof are provided. The performance scaling device includes an adaptive voltage scaling unit, a latency prediction unit, and a variable-latency datapath. The adaptive voltage scaling unit generates a plurality of operation voltages and transmits the operation voltages to the variable-latency datapath. The variable-latency datapath operates with different latencies according to the operation voltages and generates an operation latency. The latency prediction unit receives the operation latency and a system latency tolerance and generates a voltage scaling signal for the adaptive voltage scaling unit according to the operation latency and the system latency tolerance. The adaptive voltage scaling unit outputs and scales the operation voltages thereof according to the voltage scaling signal.
    Type: Application
    Filed: September 14, 2010
    Publication date: December 22, 2011
    Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Chi-Hung Lin, Pi-Cheng Hsiao, Tay-Jyi Lin, Gin-Kou Ma
  • Publication number: 20110161690
    Abstract: A voltage scaling system is provided and includes a processor, a latency predictor, a controller, and a voltage supplier. The processor performs functions and includes a function unit with variable latency. The function unit is divided into several power domains. When the processor performs the functions, the function unit generates a latency signal according to a current circuit execution speed. The latency predictor predicts performance of the processor according to the received latency signal to generate a prediction signal. The controller compares a value of the prediction signal with at least one reference value. The controller generates control signals according to the comparison result. The voltage supplier is coupled to a first voltage source providing a high voltage and a second voltage source providing a low voltage. The voltage supplier is switched to provide the high or low voltage to the power domains according to the control signals, respectively.
    Type: Application
    Filed: December 17, 2010
    Publication date: June 30, 2011
    Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Tay-Jyi LIN, Pi-Cheng HSIAO
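The per-domain switching described above can be sketched as one comparison per power domain: the controller compares each domain's prediction value against a reference and routes that domain's supply switch to the high or low voltage source. The source voltages, the per-domain references, and the "low prediction means too slow" polarity are illustrative assumptions.

```python
# Hypothetical sketch of per-domain high/low supply switching.

V_HIGH, V_LOW = 1.1, 0.8  # assumed first (high) and second (low) source voltages

def supply_voltages(predictions, references):
    """One prediction and one reference value per power domain; a domain
    predicted slower than its reference is switched to the high voltage."""
    return [V_HIGH if p < r else V_LOW for p, r in zip(predictions, references)]
```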
  • Publication number: 20110161719
    Abstract: An embodiment of a processing device includes a function unit and a control unit. The function unit receives input data and performs a specific operation on the input data to generate result data. The control unit receives the result data and generates an output signal. The control unit latches the result data according to a first clock signal to generate first data and latches the result data according to a second clock signal to generate second data. The control unit compares the first data with the second data to generate a control signal and selects the first data or the second data to serve as data of the output signal according to the control signal. The second clock signal is delayed from the first clock signal by a predefined time period.
    Type: Application
    Filed: September 24, 2010
    Publication date: June 30, 2011
    Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Chou-Kun LIN, Tay-Jyi LIN, Pi-Cheng HSIAO, Yuan-Hua CHU