Patents by Inventor Xiaochun Ye

Xiaochun Ye has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10002023
    Abstract: A method and an apparatus for managing and scheduling tasks in a many-core system are presented. The method improves process management efficiency in the many-core system. The method includes: when a process needs to be added to a task linked list, adding a process descriptor pointer of the process to a task descriptor entry corresponding to the process and adding the task descriptor entry to the task linked list; when a process needs to be deleted, finding the task descriptor entry corresponding to the process and removing it from the task linked list; and when a processor core needs to run a new task, removing the available priority index register with the highest priority from the queue of priority index registers. (An illustrative code sketch of this scheme follows this entry.)
    Type: Grant
    Filed: December 21, 2015
    Date of Patent: June 19, 2018
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Lunkai Zhang, DongRui Fan, Hao Zhang, Xiaochun Ye
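
The entry above describes the mechanism in prose only, so here is a minimal Python sketch of how such a task manager might look in software, assuming per-priority task lists and a queue of available priority index registers. All names (ManyCoreScheduler, TaskDescriptor, add_process, delete_process, next_task) are illustrative and not taken from the patent.

```python
# Hypothetical, simplified software model of the task-management scheme in patent
# 10002023. Plain Python containers stand in for what would be hardware structures.
from collections import deque

class TaskDescriptor:
    """Task descriptor entry holding a reference to a process descriptor."""
    def __init__(self, process):
        self.process = process          # the "process descriptor pointer"

class ManyCoreScheduler:
    def __init__(self, num_priorities=4):
        # One task linked list per priority level (a deque stands in for the
        # linked list of task descriptor entries).
        self.task_lists = [deque() for _ in range(num_priorities)]
        # Queue of available priority index registers, kept sorted so the
        # highest priority (lowest index) is removed first.
        self.index_registers = []

    def add_process(self, process, priority):
        entry = TaskDescriptor(process)            # wrap the process descriptor
        self.task_lists[priority].append(entry)    # add the entry to the task list
        self.index_registers.append(priority)      # mark a register as available
        self.index_registers.sort()
        return entry

    def delete_process(self, process):
        # Find the task descriptor entry for the process and unlink it.
        for priority, task_list in enumerate(self.task_lists):
            for entry in task_list:
                if entry.process is process:
                    task_list.remove(entry)
                    self.index_registers.remove(priority)
                    return entry
        return None

    def next_task(self):
        # A core asking for work removes the available priority index register
        # with the highest priority, then dequeues a task from that list.
        if not self.index_registers:
            return None
        priority = self.index_registers.pop(0)
        return self.task_lists[priority].popleft()

# Example: two processes at different priorities; the higher-priority one runs first.
sched = ManyCoreScheduler()
sched.add_process("editor", priority=2)
sched.add_process("interrupt-handler", priority=0)
print(sched.next_task().process)   # -> interrupt-handler
```
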
  • Patent number: 9990229
    Abstract: A real-time multi-task scheduling method and apparatus for dynamically scheduling a plurality of tasks in a computing system are disclosed. In the method, a processor of the computing system determines that laxity correction should be performed for a currently scheduled task, and then acquires the remaining execution time of that task according to its execution progress and the time for which it has been executed. After acquiring the laxity of the task from its remaining execution time and its deadline, the processor determines the task's priority according to the laxity and re-determines the priority queue accordingly. The processor then schedules the plurality of tasks according to the re-determined priority queue. (A simplified laxity calculation is sketched after this entry.)
    Type: Grant
    Filed: June 4, 2015
    Date of Patent: June 5, 2018
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Dongrui Fan, Xiaochun Ye, Da Wang, Hao Zhang
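
The core of the scheme above is a laxity calculation. The sketch below is a rough Python rendering under two assumptions of mine: the remaining execution time is extrapolated linearly from execution progress, and tasks are ordered least-laxity-first. The patent may specify different formulas; every name here is illustrative.

```python
# Hypothetical sketch of the laxity-correction idea in patent 9990229.
class Task:
    def __init__(self, name, deadline):
        self.name = name
        self.deadline = deadline     # absolute deadline
        self.elapsed = 0.0           # time the task has already executed
        self.progress = 0.0          # fraction of work completed (0..1)

def remaining_time(task):
    # Estimate remaining execution time from progress and elapsed time:
    # if 50% of the work took `elapsed` seconds, the other 50% is assumed
    # to take proportionally long.
    if task.progress <= 0.0:
        return float("inf")
    return task.elapsed * (1.0 - task.progress) / task.progress

def laxity(task, now):
    # Slack left before the deadline once the estimated remaining execution
    # time is accounted for. Smaller laxity -> more urgent.
    return task.deadline - now - remaining_time(task)

def rebuild_priority_queue(tasks, now):
    # Re-determine the priority queue after laxity correction: least laxity first.
    return sorted(tasks, key=lambda t: laxity(t, now))

# Example: task B is further behind schedule, so it moves to the front.
a = Task("A", deadline=100.0); a.elapsed, a.progress = 10.0, 0.5
b = Task("B", deadline=100.0); b.elapsed, b.progress = 40.0, 0.5
queue = rebuild_priority_queue([a, b], now=50.0)
print([t.name for t in queue])     # -> ['B', 'A']
```
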
  • Patent number: 9898206
    Abstract: A memory access processing method and apparatus, and a system, are disclosed. The method includes receiving memory access requests sent by a processor and combining the requests received within a preset time period into a new memory access request, where the new request includes a code bit vector corresponding to memory addresses. A first code bit identifier is configured for the code bits in the code bit vector that correspond to the memory addresses accessed by the combined requests. The method further includes sending the new memory access request to a memory controller, so that the memory controller executes a memory access operation on each memory address corresponding to the first code bit identifier. The method effectively improves memory bandwidth utilization. (A simplified model of the request combining follows this entry.)
    Type: Grant
    Filed: February 5, 2016
    Date of Patent: February 20, 2018
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Dongrui Fan, Fenglong Song, Da Wang, Xiaochun Ye
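
A hedged Python model of the request-combining idea above: requests arriving within one window over a contiguous address range are merged into a single combined request whose code bit vector marks the touched addresses with a set bit, standing in for the "first code bit identifier". The base-address/window framing and all names are my assumptions, not the patent's.

```python
# Hypothetical model of the request-combining scheme in patent 9898206.
class CombinedRequest:
    def __init__(self, base_addr, window):
        self.base_addr = base_addr
        self.bits = [0] * window      # code bit vector, one bit per address

    def merge(self, addr):
        # Mark the code bit for this address with the first code bit identifier (1).
        self.bits[addr - self.base_addr] = 1

def combine(requests, base_addr, window):
    # Combine all requests received within one preset time period into a single
    # new request carrying a code bit vector.
    combined = CombinedRequest(base_addr, window)
    for addr in requests:
        if base_addr <= addr < base_addr + window:
            combined.merge(addr)
    return combined

def memory_controller(combined):
    # The memory controller walks the bit vector and performs one access per set bit.
    return [combined.base_addr + i for i, bit in enumerate(combined.bits) if bit == 1]

# Example: four requests within the window collapse into one combined request.
req = combine([0x100, 0x103, 0x103, 0x107], base_addr=0x100, window=16)
print([hex(a) for a in memory_controller(req)])   # -> ['0x100', '0x103', '0x107']
```
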
  • Patent number: 9703905
    Abstract: The present invention provides a method and a system for simulating multiple processors in parallel, and a scheduler. In this embodiment, the scheduler maps debug interface information of a to-be-simulated processor that requires debugging onto the scheduler during parallel simulation of multiple processors, so that the scheduler can debug that processor, by using a master thread, via the debug interface pointed to by the debug interface information, thereby enabling debugging during parallel simulation of multiple processors. (A small sketch of this mapping follows this entry.)
    Type: Grant
    Filed: December 27, 2013
    Date of Patent: July 11, 2017
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Handong Ye, Jiong Cao, Xiaochun Ye, Da Wang
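
The following is a small Python sketch of the debug-interface mapping described above, under the assumption that a "debug interface" can be modeled as methods a simulated processor exposes and that the scheduler keeps a map from processor to interface. It is a shape-of-the-idea illustration, not the patent's implementation; all names are assumed.

```python
# Hypothetical sketch of the debug-interface mapping in patent 9703905.
class SimProcessor:
    def __init__(self, name):
        self.name = name
        self.pc = 0

    def debug_read_pc(self):        # stand-in for one debug-interface operation
        return self.pc

    def step(self):                 # advance the simulation by one instruction
        self.pc += 4

class Scheduler:
    def __init__(self):
        self.debug_map = {}          # processor name -> its debug interface

    def map_debug_interface(self, proc):
        # Map the debug interface information of the processor that requires
        # debugging onto the scheduler.
        self.debug_map[proc.name] = proc

    def debug(self, name):
        # The master thread uses the mapped interface to debug that processor.
        return self.debug_map[name].debug_read_pc()

cpu0 = SimProcessor("cpu0")
sched = Scheduler()
sched.map_debug_interface(cpu0)
cpu0.step(); cpu0.step()
print(sched.debug("cpu0"))          # -> 8
```
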
  • Patent number: 9483321
    Abstract: A method and an apparatus for determining a to-be-migrated task based on cache awareness in a computing system having multiple processor cores are disclosed. In the method, the computing system determines a source processor core and a destination processor core according to the load of each processor core. By monitoring, for each task on the source and destination processor cores, the number of cache misses and the number of executed instructions, the computing system obtains the average number of cache misses per kilo-instructions for the source processor core and for the destination processor core. The computing system then determines, according to these two averages, a task to be migrated from the source processor core to the destination processor core. (An illustrative MPKI-based selection is sketched after this entry.)
    Type: Grant
    Filed: April 1, 2015
    Date of Patent: November 1, 2016
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yuanchao Xu, Dongrui Fan, Hao Zhang, Xiaochun Ye
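
A Python sketch of the cache-awareness idea above. The per-task and per-core MPKI calculations follow the abstract; the final selection rule (migrate the source task whose MPKI is closest to the destination core's average) is purely an illustrative assumption, since the abstract only says the task is chosen according to the two averages.

```python
# Hypothetical sketch of the cache-aware migration idea in patent 9483321.
class TaskStats:
    def __init__(self, name, cache_misses, instructions):
        self.name = name
        self.cache_misses = cache_misses
        self.instructions = instructions

    def mpki(self):
        # Cache misses per kilo-instructions for one task.
        return 1000.0 * self.cache_misses / self.instructions

def core_average_mpki(tasks):
    # Average MPKI across the tasks currently running on one core.
    return sum(t.mpki() for t in tasks) / len(tasks)

def pick_task_to_migrate(source_tasks, dest_tasks):
    dest_avg = core_average_mpki(dest_tasks)
    # Illustrative rule: move the task that perturbs the destination's average least.
    return min(source_tasks, key=lambda t: abs(t.mpki() - dest_avg))

source = [TaskStats("t0", 800, 100_000), TaskStats("t1", 150, 100_000)]
dest   = [TaskStats("t2", 120, 100_000), TaskStats("t3", 180, 100_000)]
print(pick_task_to_migrate(source, dest).name)    # -> t1
```
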
  • Publication number: 20160154590
    Abstract: A memory access processing method and apparatus, and a system, are disclosed. The method includes receiving memory access requests sent by a processor and combining the requests received within a preset time period into a new memory access request, where the new request includes a code bit vector corresponding to memory addresses. A first code bit identifier is configured for the code bits in the code bit vector that correspond to the memory addresses accessed by the combined requests. The method further includes sending the new memory access request to a memory controller, so that the memory controller executes a memory access operation on each memory address corresponding to the first code bit identifier. The method effectively improves memory bandwidth utilization.
    Type: Application
    Filed: February 5, 2016
    Publication date: June 2, 2016
    Inventors: Dongrui Fan, Fenglong Song, Da Wang, Xiaochun Ye
  • Publication number: 20160103709
    Abstract: A method and an apparatus for managing and scheduling tasks in a many-core system are presented. The method improves process management efficiency in the many-core system. The method includes: when a process needs to be added to a task linked list, adding a process descriptor pointer of the process to a task descriptor entry corresponding to the process and adding the task descriptor entry to the task linked list; when a process needs to be deleted, finding the task descriptor entry corresponding to the process and removing it from the task linked list; and when a processor core needs to run a new task, removing the available priority index register with the highest priority from the queue of priority index registers.
    Type: Application
    Filed: December 21, 2015
    Publication date: April 14, 2016
    Inventors: Lunkai Zhang, DongRui Fan, Hao Zhang, Xiaochun Ye
  • Publication number: 20150268996
    Abstract: A real-time multi-task scheduling method and apparatus for dynamically scheduling a plurality of tasks in a computing system are disclosed. In the method, a processor of the computing system determines that laxity correction should be performed for a currently scheduled task, and then acquires the remaining execution time of that task according to its execution progress and the time for which it has been executed. After acquiring the laxity of the task from its remaining execution time and its deadline, the processor determines the task's priority according to the laxity and re-determines the priority queue accordingly. The processor then schedules the plurality of tasks according to the re-determined priority queue.
    Type: Application
    Filed: June 4, 2015
    Publication date: September 24, 2015
    Inventors: Dongrui Fan, Xiaochun Ye, Da Wang, Hao Zhang
  • Publication number: 20150205642
    Abstract: A method and an apparatus for determining a to-be-migrated task based on cache awareness in a computing system having multiple processor cores are disclosed. In the method, the computing system determines a source processor core and a destination processor core according to the load of each processor core. By monitoring, for each task on the source and destination processor cores, the number of cache misses and the number of executed instructions, the computing system obtains the average number of cache misses per kilo-instructions for the source processor core and for the destination processor core. The computing system then determines, according to these two averages, a task to be migrated from the source processor core to the destination processor core.
    Type: Application
    Filed: April 1, 2015
    Publication date: July 23, 2015
    Inventors: Yuanchao Xu, Dongrui Fan, Hao Zhang, Xiaochun Ye
  • Publication number: 20140114640
    Abstract: The present invention provides a method and a system for simulating multiple processors in parallel, and a scheduler. In this embodiment, the scheduler maps debug interface information of a to-be-simulated processor that requires debugging onto the scheduler during parallel simulation of multiple processors, so that the scheduler can debug that processor, by using a master thread, via the debug interface pointed to by the debug interface information, thereby enabling debugging during parallel simulation of multiple processors.
    Type: Application
    Filed: December 27, 2013
    Publication date: April 24, 2014
    Applicant: Huawei Technologies Co., Ltd.
    Inventors: Handong Ye, Jiong Cao, Xiaochun Ye, Da Wang
  • Publication number: 20130231912
    Abstract: A method for simulating multiple processors in parallel is provided. A scheduler creates one or more slave threads using a master thread and determines which processor is simulated by the master thread and which processor is simulated by each slave thread, so that the scheduler can use the master thread and the slave threads to invoke, through a first execute interface, the corresponding simulated processors to execute their instructions, where the first execute interface is registered with the scheduler by each of those simulated processors. Simulation efficiency and resource utilization are thereby improved. (A rough sketch of this master/slave arrangement follows this entry.)
    Type: Application
    Filed: August 13, 2012
    Publication date: September 5, 2013
    Applicant: Huawei Technologies Co., Ltd.
    Inventors: Handong Ye, Jiong Cao, Xiaochun Ye, Da Wang
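
For the last entry above (publication 20130231912), here is a rough Python sketch of the master/slave arrangement, with Python threads standing in for the slave threads and a registered callable standing in for the "first execute interface". A real simulator would use OS threads running native code; this is only a shape-of-the-idea sketch, and every name in it is assumed.

```python
# Hypothetical sketch of the master/slave simulation scheme in publication 20130231912.
import threading

class SimProcessor:
    def __init__(self, name):
        self.name = name
        self.executed = 0

    def execute(self, n_instructions):
        # The "first execute interface": run a batch of simulated instructions.
        self.executed += n_instructions

class Scheduler:
    def __init__(self):
        self.execute_ifaces = {}

    def register_execute_interface(self, proc):
        # Each simulated processor registers its execute interface with the scheduler.
        self.execute_ifaces[proc.name] = proc.execute

    def run(self, assignment, batch=1000):
        # `assignment` maps thread role -> processor name. The master thread creates
        # one slave thread per additional processor and invokes the registered
        # execute interfaces so the processors run in parallel.
        roles = list(assignment.items())
        slaves = []
        for _, name in roles[1:]:                          # slave-thread processors
            t = threading.Thread(target=self.execute_ifaces[name], args=(batch,))
            slaves.append(t)
            t.start()
        _, master_name = roles[0]                          # master-thread processor
        self.execute_ifaces[master_name](batch)
        for t in slaves:
            t.join()

cpus = [SimProcessor(f"cpu{i}") for i in range(3)]
sched = Scheduler()
for c in cpus:
    sched.register_execute_interface(c)
sched.run({"master": "cpu0", "slave1": "cpu1", "slave2": "cpu2"})
print([c.executed for c in cpus])    # -> [1000, 1000, 1000]
```
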