Patents by Inventor Xiao Tao Chang

Xiao Tao Chang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8151091
    Abstract: A data processing system and method are disclosed. The system comprises an instruction-fetch stage, in which an instruction is fetched and a specific instruction is passed to the decode stage; a decode stage, in which said specific instruction indicates that the contents of a register in the register file are to be used as an index, and the register file is then accessed at the entry pointed to by said index; and an execution stage, in which the access result of said decode stage is received and computations are performed according to that result. (A conceptual sketch of this register-indirect access appears after this listing.)
    Type: Grant
    Filed: May 21, 2009
    Date of Patent: April 3, 2012
    Assignee: International Business Machines Corporation
    Inventors: Xiao Tao Chang, Qiang Liu
  • Publication number: 20120054451
    Abstract: This invention provides a request controlling apparatus, processor and method. The request controlling apparatus is connected to a request storage unit and includes: a queue-unit storing-flag recording region configured to record a storing flag corresponding to each queue unit in the request storage unit; a comparing means configured to judge whether an incoming first queue unit corresponds to the same message as a queue unit already present in the request storage unit; and a flag setting means configured, when the first queue unit does correspond to the same message, to set the storing flag of the already present queue unit in the recording region, indicating that the message state related to that queue unit will not be stored. (A conceptual sketch of this flag-based scheme appears after this listing.)
    Type: Application
    Filed: August 25, 2011
    Publication date: March 1, 2012
    Applicant: International Business Machines Corporation
    Inventors: Xiao Tao Chang, Hubertus Franke, Xiaolu Mei, Kun Wang, Hao Yu
  • Publication number: 20120030421
    Abstract: The invention discloses a method and system for maintaining states for the request queue of a hardware accelerator, wherein the request queue stores at least one Coprocessor Request Block (CRB) to be input into the hardware accelerator. The method comprises: receiving the state pointer of a specified CRB in response to that CRB being about to enter the hardware accelerator from the request queue; acquiring the physical storage locations of other CRBs in the request queue whose state pointers are the same as that of the specified CRB; controlling the input of the specified CRB, together with the state information required for processing it, into a hardware buffer; receiving the state information of the specified CRB after it has been processed in the hardware accelerator; and, if the above physical storage locations are not vacant, taking the physical storage location closest to the specified CRB on the request queue as the selected location and storing the received state information there.
    Type: Application
    Filed: May 16, 2011
    Publication date: February 2, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Xiao Tao Chang, Huo Ding Li, Xiaolu Mei, Ru Yun Zhang
  • Publication number: 20110246667
    Abstract: A processing unit coupled to a bus for accelerating data transmission, and a method for accelerating data transmission. The present invention provides a streaming data transmission mode in which a plurality of data blocks are transmitted via one handshake. The present invention employs a handshake-saving policy: when a processing unit sends a request comprising a plurality of data blocks on a bus, a cache or memory performs address matching to judge whether there are any hit data blocks. If there are, the cache or memory needs to reply only once and can then continuously transmit the hit data blocks it possesses. Thus, a separate handshake for each data block is no longer needed. (A conceptual sketch of this handshake-saving transfer appears after this listing.)
    Type: Application
    Filed: March 29, 2011
    Publication date: October 6, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Xiao Tao Chang, Rui Hou, Wei Liu, Kun Wang, Yu Zhang
  • Publication number: 20110200040
    Abstract: An embodiment of the invention provides an extremum route determining engine and method. The engine includes a memory for storing a weighted path in a graph and an extremum route determining logic circuit. The logic circuit includes a path reading section for reading the path in the graph, a writing section for updating the weight of the read path according to a predetermined extremum requirement and writing the updated path into the memory, and an extremum route determining section for determining an extremum route. The method includes reading a stored weighted path in a graph, updating the weight of the read path, writing the updated path into a memory, and determining an extremum route. An embodiment of the invention improves the processing speed of extremum route determination. (A conceptual sketch of this read-update-write cycle appears after this listing.)
    Type: Application
    Filed: February 11, 2011
    Publication date: August 18, 2011
    Applicant: International Business Machines Corporation
    Inventors: Xiao Tao Chang, Wei Liu, Kun Wang, Hong Bo Zeng
  • Publication number: 20110161540
    Abstract: A method and apparatus for lock allocation control. When a processor core acquires a lock, the other processor cores do not need to constantly poll memory to check whether the required lock has been released. Instead, they remain in a sleep state, and the next processor core that needs the lock is selectively woken up based on a predetermined rule, so that an out-of-order lock contention procedure is turned into an in-order lock allocation procedure. By selectively waking a sleeping processor core, the method and apparatus avoid occupying a large amount of bus bandwidth, avoid cache misses, and reduce the chip's power consumption. (A conceptual sketch of this in-order hand-off appears after this listing.)
    Type: Application
    Filed: December 22, 2010
    Publication date: June 30, 2011
    Applicant: International Business Machines Corporation
    Inventors: Xiao Tao Chang, Rui Hou, Yudong Yang, Hong Bo Zeng, Zhen Bo Zhu
  • Publication number: 20110055522
    Abstract: A request control device, request control method, and multiprocessor cooperation architecture. The request control device is connected to a request storage module and includes a comparing means and an identifier setting means. The comparing means is configured to determine whether an incoming first queue unit corresponds to the same message as a queue unit that already exists in the request storage module. The identifier setting means is configured, if so, to set a save identifier of the existing queue unit to indicate that the state associated with the message need not be saved. According to this technical solution, memory accesses caused by saving and loading states are reduced, thereby increasing the processing speed of the processor.
    Type: Application
    Filed: August 18, 2010
    Publication date: March 3, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Xiao Tao Chang, Wei Liu, Kun Wang, Hong Bo Zeng
  • Publication number: 20100241822
    Abstract: An apparatus and method for managing a translation look-aside buffer (TLB) that is shared by a plurality of jobs. The method includes the steps of: obtaining at least one attribute of each job of the plurality of jobs; assigning a priority level to each job according to its at least one attribute; and managing the related TLB entries of each job according to its priority level. The present invention also provides an apparatus for managing the TLB corresponding to the above method. The method and apparatus provide efficient use of the shared TLB. (A conceptual sketch of priority-aware TLB management appears after this listing.)
    Type: Application
    Filed: March 17, 2010
    Publication date: September 23, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Xiao Tao Chang, Rui Hou, Wei Liu, Kun Wang
  • Publication number: 20100161875
    Abstract: A simulator and a simulating method for running a guest program in a host are disclosed. The simulator includes an initialization device configured to set the content of a hypervisor page table in the host, the hypervisor page table mapping the guest physical address space to the host physical address space. The simulator further includes a binary translation device configured to employ program logical addresses to perform memory accesses in code translation. The simulator also includes a miss handling device configured to update a guest translation look-aside buffer by treating a miss in the host translation look-aside buffer caused by execution of the translated code as a miss in the guest translation look-aside buffer, wherein the host translation look-aside buffer is configured to buffer entries mapping addresses in the guest program logical address space to addresses in the guest physical address space. (A conceptual sketch of this miss-handling path appears after this listing.)
    Type: Application
    Filed: December 8, 2009
    Publication date: June 24, 2010
    Applicant: International Business Machines Corporation
    Inventors: Xiao Tao Chang, Huayong Wang, Kun Wang, Yu Zhang
  • Publication number: 20090300330
    Abstract: A data processing system and method are disclosed. The system comprises an instruction-fetch stage, in which an instruction is fetched and a specific instruction is passed to the decode stage; a decode stage, in which said specific instruction indicates that the contents of a register in the register file are to be used as an index, and the register file is then accessed at the entry pointed to by said index; and an execution stage, in which the access result of said decode stage is received and computations are performed according to that result.
    Type: Application
    Filed: May 21, 2009
    Publication date: December 3, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Xiao Tao Chang, Qiang Liu
  • Publication number: 20090193424
    Abstract: The present invention discloses a method of processing instructions in a pipeline-based central processing unit, wherein the pipeline is partitioned into base pipeline stages and enhanced pipeline stages according to function, the base pipeline stages being active at all times and the enhanced pipeline stages being activated or shut down according to the performance requirements of a workload. The present invention further discloses a method of processing instructions in a pipeline-based central processing unit, wherein the pipeline is partitioned into base pipeline stages and enhanced pipeline stages according to function, each pipeline stage being partitioned into a base module and at least one enhanced module, the base module being active at all times and the enhanced module being activated or shut down according to the performance requirements of a workload. (A conceptual sketch of this base/enhanced gating appears after this listing.)
    Type: Application
    Filed: January 22, 2009
    Publication date: July 30, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Wen Bo Shen, Peng Shao, Yu Li, Xiao Tao Chang, Yi Ge, Huayong Wang, Huan Hao Zou
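
Patent 8151091 and application 20090300330 describe register-indirect access to the register file: the decode stage reads one register, uses its contents as an index, and accesses the register file at that index before handing the result to the execute stage. The following is a minimal software sketch of that flow, assuming a simple three-method pipeline model; all class, method, and opcode names are illustrative rather than taken from the patent.

    # Minimal model of the fetch/decode/execute flow described in the abstract.
    # All names and the toy "DOUBLE" opcode are illustrative, not from the patent.

    class RegisterIndirectCPU:
        def __init__(self, num_regs=16):
            self.regs = [0] * num_regs      # the register file

        def fetch(self, program, pc):
            """Instruction-fetch stage: return the instruction at the program counter."""
            return program[pc]

        def decode(self, instr):
            """Decode stage: the named register's contents serve as an index,
            and the register file is accessed at that index."""
            opcode, index_reg = instr
            index = self.regs[index_reg]    # contents of the register = index
            return opcode, self.regs[index] # access result forwarded to execute

        def execute(self, opcode, operand):
            """Execution stage: compute using the access result of the decode stage."""
            if opcode == "DOUBLE":
                return operand * 2
            raise ValueError(f"unknown opcode {opcode}")

    cpu = RegisterIndirectCPU()
    cpu.regs[1] = 5                         # register r1 holds the index 5
    cpu.regs[5] = 21                        # register r5 holds the operand
    opcode, operand = cpu.decode(cpu.fetch([("DOUBLE", 1)], 0))
    print(cpu.execute(opcode, operand))     # -> 42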
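
Applications 20120054451 and 20110055522 describe the same flag-based idea: when an arriving queue unit belongs to the same message as one already queued, the earlier unit is flagged so that the intermediate message state need not be saved and reloaded. A minimal software analogy, assuming a plain list as the request storage and a boolean as the storing flag (both illustrative), might look like this:

    # Illustrative sketch of the storing-flag idea: if two queued requests belong
    # to the same message, the earlier one is marked so its state is not saved.

    from dataclasses import dataclass, field

    @dataclass
    class QueueUnit:
        message_id: int
        save_state: bool = True          # the "storing flag"; True means save the state

    @dataclass
    class RequestStorage:
        units: list = field(default_factory=list)

        def enqueue(self, incoming: QueueUnit):
            # Comparing means: does an existing unit belong to the same message?
            for existing in self.units:
                if existing.message_id == incoming.message_id:
                    # Flag setting means: the existing unit's state need not be
                    # stored, since the same message is processed again right after.
                    existing.save_state = False
            self.units.append(incoming)

    storage = RequestStorage()
    storage.enqueue(QueueUnit(message_id=7))
    storage.enqueue(QueueUnit(message_id=7))
    print([(u.message_id, u.save_state) for u in storage.units])
    # -> [(7, False), (7, True)]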
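
Application 20110246667 replaces one bus handshake per data block with a single handshake followed by a streamed reply of all hit blocks. The sketch below only contrasts the handshake counts of the two modes, assuming a dictionary stands in for the cache or memory; it is not a model of any real bus protocol.

    # Contrast: one handshake per block vs. a single handshake followed by a
    # streaming transfer of the hit blocks. Handshakes are simply counted here.

    def per_block_transfer(memory, addresses):
        handshakes, data = 0, []
        for addr in addresses:
            handshakes += 1                     # a request/reply pair per block
            data.append(memory.get(addr))
        return handshakes, data

    def streaming_transfer(memory, addresses):
        handshakes = 1                          # one request covering all blocks
        hits = [a for a in addresses if a in memory]   # address matching
        data = [memory[a] for a in hits]        # hit blocks streamed back-to-back
        return handshakes, data

    memory = {0x100: "A", 0x104: "B", 0x108: "C"}
    addrs = [0x100, 0x104, 0x108]
    print(per_block_transfer(memory, addrs))    # (3, ['A', 'B', 'C'])
    print(streaming_transfer(memory, addrs))    # (1, ['A', 'B', 'C'])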
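
Application 20110200040 describes a cycle of reading stored weighted paths, updating their weights against a predetermined extremum requirement, and writing them back until the extremum route is determined. A Bellman-Ford-style relaxation over an edge list gives the flavor in software; it is a stand-in for the hardware sections named in the abstract, and the minimum-weight rule is just one possible extremum requirement.

    # Software stand-in for the read / update / write / determine cycle of the
    # extremum-route engine, using minimum total weight as the extremum rule.

    def extremum_route_weights(edges, num_nodes, source):
        INF = float("inf")
        weight = [INF] * num_nodes               # the "memory" of per-node path weights
        weight[source] = 0
        for _ in range(num_nodes - 1):
            for u, v, w in edges:                # path reading section
                if weight[u] + w < weight[v]:    # predetermined extremum requirement
                    weight[v] = weight[u] + w    # writing section stores the update
        return weight                            # basis for the extremum route

    edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 5)]
    print(extremum_route_weights(edges, num_nodes=4, source=0))   # [0, 3, 1, 8]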
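
Application 20110161540 turns busy-wait lock contention into an ordered hand-off: waiting cores sleep, and the lock holder wakes exactly the next waiter. A thread-based analogy is sketched below, assuming each waiter sleeps on its own event so that release() can wake one waiter selectively; it is a conceptual stand-in, not the patented hardware mechanism.

    # FIFO hand-off lock: waiters sleep instead of polling, and release() wakes
    # only the next waiter in line -- an in-order allocation of the lock.

    import threading
    from collections import deque

    class OrderedWakeupLock:
        def __init__(self):
            self._mutex = threading.Lock()   # protects only the internal queue
            self._queue = deque()            # events of sleeping waiters, FIFO
            self._held = False

        def acquire(self):
            with self._mutex:
                if not self._held:
                    self._held = True
                    return
                gate = threading.Event()     # this waiter's private wake-up signal
                self._queue.append(gate)
            gate.wait()                      # sleep until selectively woken

        def release(self):
            with self._mutex:
                if self._queue:
                    # Wake exactly one waiter, in arrival order; ownership passes
                    # directly to it, so _held stays True.
                    self._queue.popleft().set()
                else:
                    self._held = False

    lock = OrderedWakeupLock()
    entered = []

    def worker(i):
        lock.acquire()
        entered.append(i)                    # critical section
        lock.release()

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(sorted(entered))                   # [0, 1, 2, 3]: each worker ran once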
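
Application 20100241822 assigns each job a priority derived from its attributes and manages that job's TLB entries accordingly. One plausible reading is priority-aware victim selection when the shared TLB is full; the sketch below uses that interpretation, and both the attribute-to-priority rule and the data layout are assumptions for illustration.

    # Illustrative priority-aware shared TLB: on overflow, an entry belonging to
    # the lowest-priority job is evicted first. The priority rule is assumed.

    from dataclasses import dataclass

    @dataclass
    class TLBEntry:
        virtual_page: int
        physical_page: int
        job_id: int

    class SharedTLB:
        def __init__(self, capacity, job_priority):
            self.capacity = capacity
            self.job_priority = job_priority     # job_id -> priority (higher = keep)
            self.entries = []

        def insert(self, entry: TLBEntry):
            if len(self.entries) >= self.capacity:
                # Manage entries per job priority: evict from the lowest-priority job.
                victim = min(self.entries, key=lambda e: self.job_priority[e.job_id])
                self.entries.remove(victim)
            self.entries.append(entry)

    def priority_from_attributes(attrs):
        """Toy rule: latency-sensitive jobs outrank batch jobs (assumed attribute)."""
        return 2 if attrs.get("latency_sensitive") else 1

    priorities = {1: priority_from_attributes({"latency_sensitive": True}),
                  2: priority_from_attributes({"latency_sensitive": False})}
    tlb = SharedTLB(capacity=2, job_priority=priorities)
    tlb.insert(TLBEntry(0x10, 0xA0, job_id=2))
    tlb.insert(TLBEntry(0x20, 0xB0, job_id=1))
    tlb.insert(TLBEntry(0x30, 0xC0, job_id=1))   # evicts job 2's entry first
    print([(hex(e.virtual_page), e.job_id) for e in tlb.entries])
    # -> [('0x20', 1), ('0x30', 1)]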
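
Application 20100161875 handles a host-TLB miss raised by translated guest code as if it were a guest-TLB miss, then uses the hypervisor page table set up at initialization to complete the mapping to host physical memory. The dictionary-based sketch below illustrates that path under a flat 4 KB paging assumption; the page-table layout and class name are hypothetical.

    # Hypothetical sketch: a host-TLB miss during translated guest code is treated
    # as a guest-TLB miss, then mapped to host physical space via the hypervisor
    # page table. Page tables are plain dicts of page numbers; 4 KB pages assumed.

    PAGE_MASK = 0xFFF

    class SimulatorMMU:
        def __init__(self, guest_page_table, hypervisor_page_table):
            self.guest_pt = guest_page_table     # guest logical page -> guest physical page
            self.hyp_pt = hypervisor_page_table  # guest physical page -> host physical page
            self.guest_tlb = {}                  # guest logical page -> guest physical page
            self.host_tlb = {}                   # guest logical page -> host physical page

        def translate(self, guest_logical_addr):
            """Translate a guest program logical address to a host physical address."""
            page = guest_logical_addr & ~PAGE_MASK
            offset = guest_logical_addr & PAGE_MASK
            if page not in self.host_tlb:
                # The host-TLB miss is handled as a guest-TLB miss.
                self._handle_guest_tlb_miss(page)
            return self.host_tlb[page] + offset

        def _handle_guest_tlb_miss(self, page):
            guest_phys = self.guest_pt[page]     # walk the guest page table
            self.guest_tlb[page] = guest_phys    # refill the guest TLB
            # The hypervisor page table maps guest physical to host physical,
            # so the host TLB can be refilled in the same step.
            self.host_tlb[page] = self.hyp_pt[guest_phys]

    guest_pt = {0x0000: 0x3000}                  # guest logical page -> guest physical page
    hyp_pt = {0x3000: 0x8000}                    # guest physical page -> host physical page
    mmu = SimulatorMMU(guest_pt, hyp_pt)
    print(hex(mmu.translate(0x0042)))            # -> 0x8042 after the miss is handled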
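
Application 20090193424 keeps the base pipeline stages running at all times and powers enhanced stages (or enhanced modules within a stage) up or down to match a workload's performance requirement. The toy model below gates whole enhanced stages on a normalized requirement value; the stage names and the 0.5 threshold are assumptions for illustration.

    # Toy model of base vs. enhanced pipeline stages: the base stages always run,
    # and the enhanced stages are switched on only when the workload demands it.

    class GatedPipeline:
        BASE_STAGES = ["fetch", "decode", "execute", "writeback"]
        ENHANCED_STAGES = ["branch_predict", "prefetch"]

        def __init__(self):
            self.enhanced_active = False

        def adjust(self, performance_requirement: float):
            """Activate or shut down the enhanced stages for the workload
            (requirement normalized to 0.0-1.0; the threshold is illustrative)."""
            self.enhanced_active = performance_requirement > 0.5

        def active_stages(self):
            stages = list(self.BASE_STAGES)              # always activated
            if self.enhanced_active:
                stages += self.ENHANCED_STAGES           # activated on demand
            return stages

    p = GatedPipeline()
    p.adjust(0.2)
    print(p.active_stages())   # base stages only
    p.adjust(0.9)
    print(p.active_stages())   # base plus enhanced stages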