Patents by Inventor Xiaochun Ye
Xiaochun Ye has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10002023
Abstract: A method and an apparatus for managing and scheduling tasks in a many-core system are presented. The method improves process management efficiency in the many-core system. The method includes: when a process needs to be added to a task linked list, adding a process descriptor pointer of the process to a task descriptor entry corresponding to the process, and adding the task descriptor entry to the task linked list; when a process needs to be deleted, finding the task descriptor entry corresponding to the process and removing it from the task linked list; and when a processor core needs to run a new task, removing an available priority index register with the highest priority from the queue of priority index registers.
Type: Grant
Filed: December 21, 2015
Date of Patent: June 19, 2018
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Lunkai Zhang, Dongrui Fan, Hao Zhang, Xiaochun Ye
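The per-priority task-list bookkeeping this abstract describes can be sketched roughly as below. All class, method, and field names are illustrative, not taken from the patent; a Python deque stands in for the hardware task linked list, and "highest priority" is assumed to mean the lowest list index.

```python
from collections import deque

class ManyCoreScheduler:
    """Illustrative sketch of task add/delete/dispatch in a many-core system."""

    def __init__(self, num_priorities=8):
        # One task linked list per priority level; each element is a
        # task descriptor entry holding the process descriptor pointer.
        self.task_lists = [deque() for _ in range(num_priorities)]
        self.entry_of = {}  # process id -> (priority, task descriptor entry)

    def add_process(self, pid, priority):
        entry = {"pid": pid}  # descriptor entry wrapping the process pointer
        self.entry_of[pid] = (priority, entry)
        self.task_lists[priority].append(entry)

    def delete_process(self, pid):
        priority, entry = self.entry_of.pop(pid)
        self.task_lists[priority].remove(entry)  # unlink from the task list

    def next_task(self):
        # Dispatch from the highest-priority non-empty list (the analogue of
        # popping the highest-priority available priority index register).
        for lst in self.task_lists:
            if lst:
                return lst.popleft()["pid"]
        return None
```

Keeping a separate list per priority level makes add, delete, and dispatch constant-time in the number of tasks, which is the efficiency gain the abstract claims.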
-
Patent number: 9990229
Abstract: A real-time multi-task scheduling method and apparatus for dynamically scheduling a plurality of tasks in a computing system are disclosed. In the method, a processor of the computing system determines that laxity correction should be performed for the currently scheduled task, and then acquires the task's remaining execution time from its execution progress and the time for which it has already been executed. After deriving the task's laxity from its remaining execution time and its deadline, the processor determines the task's priority according to its laxity, re-determines the priority queue according to that priority, and then schedules the plurality of tasks according to the re-determined priority queue.
Type: Grant
Filed: June 4, 2015
Date of Patent: June 5, 2018
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Dongrui Fan, Xiaochun Ye, Da Wang, Hao Zhang
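The laxity computation outlined in the abstract can be sketched as follows, under the common assumption that remaining time is extrapolated linearly from progress and that smaller laxity means higher urgency; the function names and the sorting policy are illustrative, not from the patent.

```python
def remaining_time(elapsed, progress):
    """Extrapolate remaining execution time from elapsed time and progress in (0, 1]."""
    return elapsed * (1.0 - progress) / progress

def laxity(deadline, now, elapsed, progress):
    # Laxity = time left until the deadline minus estimated remaining work.
    return (deadline - now) - remaining_time(elapsed, progress)

def reschedule(tasks, now):
    # Assumed policy: smaller laxity -> more urgent -> earlier in the queue.
    return sorted(
        tasks,
        key=lambda t: laxity(t["deadline"], now, t["elapsed"], t["progress"]),
    )
```

For example, a task that is 40% complete after 4 time units is estimated to need 6 more units; with a deadline 10 units away, its laxity is 4.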
-
Patent number: 9898206
Abstract: A memory access processing method and apparatus, and a system, are disclosed. The method includes receiving memory access requests sent by a processor and combining the multiple requests received within a preset time period into a new memory access request, where the new request includes a code bit vector corresponding to memory addresses. A first code bit identifier is set on the code bits in the vector that correspond to the memory addresses accessed by the combined requests. The new request is then sent to a memory controller, so that the memory controller executes a memory access operation on each memory address corresponding to the first code bit identifier. The method effectively improves memory bandwidth utilization.
Type: Grant
Filed: February 5, 2016
Date of Patent: February 20, 2018
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Dongrui Fan, Fenglong Song, Da Wang, Xiaochun Ye
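The request-combining idea can be sketched as below, assuming word-granularity addresses within a fixed window starting at a base address; the window size, the `1` used as the "first code bit identifier", and all names are assumptions for illustration.

```python
def combine_requests(addresses, base, window_size):
    """Merge word addresses in [base, base + window_size) into one request:
    a (base, code bit vector) pair with bit i set iff word base+i was requested."""
    bits = [0] * window_size
    for addr in addresses:
        offset = addr - base
        if 0 <= offset < window_size:
            bits[offset] = 1  # first code bit identifier marks accessed words
    return base, bits

def serve_request(base, bits):
    """Memory-controller side: access every address whose code bit is set."""
    return [base + i for i, b in enumerate(bits) if b == 1]
```

Duplicate requests to the same address collapse into a single set bit, which is one way such coalescing can raise effective memory bandwidth.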
-
Patent number: 9703905
Abstract: The present invention provides a method and a system for simulating multiple processors in parallel, and a scheduler. In this embodiment, the scheduler maps the debug interface information of the to-be-simulated processor requiring debugging onto the scheduler during parallel simulation of multiple processors, so that the scheduler is capable of using a master thread to debug that processor via the debug interface pointed to by the debug interface information, thereby enabling debugging during parallel simulation of multiple processors.
Type: Grant
Filed: December 27, 2013
Date of Patent: July 11, 2017
Assignee: Huawei Technologies Co., Ltd.
Inventors: Handong Ye, Jiong Cao, Xiaochun Ye, Da Wang
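One minimal way to picture "mapping debug interface information onto the scheduler" is the scheduler holding a reference to the target processor's debug interface and routing master-thread debug calls through it; every name below is hypothetical.

```python
class SimProcessor:
    """Simulated processor exposing a debug interface (illustrative)."""

    def __init__(self):
        self.regs = {"pc": 0}

    def debug_read(self, reg):
        return self.regs[reg]

class Scheduler:
    def __init__(self):
        self._debug_target = None  # debug interface info mapped onto the scheduler

    def map_debug_interface(self, processor):
        self._debug_target = processor

    def master_debug_read(self, reg):
        # The master thread debugs the target processor through the
        # debug interface pointed to by the mapped information.
        return self._debug_target.debug_read(reg)
```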
-
Patent number: 9483321
Abstract: A method and an apparatus for determining a to-be-migrated task, based on cache awareness, in a computing system having multiple processor cores are disclosed. In the method, the computing system determines a source processor core and a destination processor core according to the load of each processor core. By monitoring the number of cache misses and the number of executed instructions of each task on the source and destination processor cores, the computing system obtains the average cache misses per kilo instructions of the source processor core and of the destination processor core. The computing system then uses these averages to determine which task to migrate from the source processor core to the destination processor core.
Type: Grant
Filed: April 1, 2015
Date of Patent: November 1, 2016
Assignee: Huawei Technologies Co., Ltd.
Inventors: Yuanchao Xu, Dongrui Fan, Hao Zhang, Xiaochun Ye
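The per-task metric here is misses per kilo instructions (MPKI). The abstract leaves the exact selection rule abstract, so the sketch below uses one plausible heuristic, picking the source task whose MPKI is closest to the destination core's average; that rule and all names are assumptions.

```python
def mpki(misses, instructions):
    """Cache misses per kilo (1000) executed instructions."""
    return 1000.0 * misses / instructions

def pick_task_to_migrate(src_tasks, dst_avg_mpki):
    """Assumed heuristic: migrate the source task whose MPKI best matches the
    destination core's average MPKI, to disturb its cache behavior least."""
    return min(
        src_tasks,
        key=lambda t: abs(mpki(t["misses"], t["instr"]) - dst_avg_mpki),
    )
```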
-
Publication number: 20160154590
Abstract: A memory access processing method and apparatus, and a system, are disclosed. The method includes receiving memory access requests sent by a processor and combining the multiple requests received within a preset time period into a new memory access request, where the new request includes a code bit vector corresponding to memory addresses. A first code bit identifier is set on the code bits in the vector that correspond to the memory addresses accessed by the combined requests. The new request is then sent to a memory controller, so that the memory controller executes a memory access operation on each memory address corresponding to the first code bit identifier. The method effectively improves memory bandwidth utilization.
Type: Application
Filed: February 5, 2016
Publication date: June 2, 2016
Inventors: Dongrui Fan, Fenglong Song, Da Wang, Xiaochun Ye
-
Publication number: 20160103709
Abstract: A method and an apparatus for managing and scheduling tasks in a many-core system are presented. The method improves process management efficiency in the many-core system. The method includes: when a process needs to be added to a task linked list, adding a process descriptor pointer of the process to a task descriptor entry corresponding to the process, and adding the task descriptor entry to the task linked list; when a process needs to be deleted, finding the task descriptor entry corresponding to the process and removing it from the task linked list; and when a processor core needs to run a new task, removing an available priority index register with the highest priority from the queue of priority index registers.
Type: Application
Filed: December 21, 2015
Publication date: April 14, 2016
Inventors: Lunkai Zhang, Dongrui Fan, Hao Zhang, Xiaochun Ye
-
Publication number: 20150268996
Abstract: A real-time multi-task scheduling method and apparatus for dynamically scheduling a plurality of tasks in a computing system are disclosed. In the method, a processor of the computing system determines that laxity correction should be performed for the currently scheduled task, and then acquires the task's remaining execution time from its execution progress and the time for which it has already been executed. After deriving the task's laxity from its remaining execution time and its deadline, the processor determines the task's priority according to its laxity, re-determines the priority queue according to that priority, and then schedules the plurality of tasks according to the re-determined priority queue.
Type: Application
Filed: June 4, 2015
Publication date: September 24, 2015
Inventors: Dongrui Fan, Xiaochun Ye, Da Wang, Hao Zhang
-
Publication number: 20150205642
Abstract: A method and an apparatus for determining a to-be-migrated task, based on cache awareness, in a computing system having multiple processor cores are disclosed. In the method, the computing system determines a source processor core and a destination processor core according to the load of each processor core. By monitoring the number of cache misses and the number of executed instructions of each task on the source and destination processor cores, the computing system obtains the average cache misses per kilo instructions of the source processor core and of the destination processor core. The computing system then uses these averages to determine which task to migrate from the source processor core to the destination processor core.
Type: Application
Filed: April 1, 2015
Publication date: July 23, 2015
Inventors: Yuanchao Xu, Dongrui Fan, Hao Zhang, Xiaochun Ye
-
Publication number: 20140114640
Abstract: The present invention provides a method and a system for simulating multiple processors in parallel, and a scheduler. In this embodiment, the scheduler maps the debug interface information of the to-be-simulated processor requiring debugging onto the scheduler during parallel simulation of multiple processors, so that the scheduler is capable of using a master thread to debug that processor via the debug interface pointed to by the debug interface information, thereby enabling debugging during parallel simulation of multiple processors.
Type: Application
Filed: December 27, 2013
Publication date: April 24, 2014
Applicant: Huawei Technologies Co., Ltd.
Inventors: Handong Ye, Jiong Cao, Xiaochun Ye, Da Wang
-
Publication number: 20130231912
Abstract: A method for simulating multiple processors in parallel is provided. The scheduler creates one or more slave threads using a master thread, and determines which processor is simulated by the master thread and which processor is simulated by each slave thread. The scheduler can then use the master thread and the slave threads to invoke, through a first execute interface, the processors they simulate to execute the corresponding instructions, where the first execute interface is registered with the scheduler by each of those simulated processors. Thus, simulation efficiency can be increased and resource utilization can be improved.
Type: Application
Filed: August 13, 2012
Publication date: September 5, 2013
Applicant: Huawei Technologies Co., Ltd.
Inventors: Handong Ye, Jiong Cao, Xiaochun Ye, Da Wang
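The master/slave threading scheme in this abstract can be sketched as follows, with the master thread driving one simulated processor directly and one slave thread per remaining processor, each calling an `execute` method standing in for the registered execute interface; all names are illustrative.

```python
import threading

class SimProcessor:
    """Simulated processor; execute() stands in for the registered execute interface."""

    def __init__(self, name):
        self.name = name
        self.trace = []

    def execute(self, instr):
        self.trace.append(instr)

def run_simulation(processors, program):
    """Master thread simulates processors[0]; one slave thread per remaining one."""
    def drive(cpu):
        for instr in program:
            cpu.execute(instr)

    slaves = [
        threading.Thread(target=drive, args=(cpu,)) for cpu in processors[1:]
    ]
    for t in slaves:
        t.start()
    drive(processors[0])  # the master thread drives its own processor
    for t in slaves:
        t.join()
```

Running the master thread's processor concurrently with the slave threads, instead of leaving the master idle, is the resource-utilization gain the abstract alludes to.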