Patents by Inventor Weiguang Cai

Weiguang Cai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11093245
    Abstract: A computer system and a memory access technology are provided. In the computer system, when load/store instructions having a dependency relationship are processed, dependency information between a producer load/store instruction and a consumer load/store instruction can be obtained from a processor. A consumer load/store request is sent to a memory controller in the computer system based on the obtained dependency information, so that the memory controller can locally terminate the dependency relationship between load/store requests based on the dependency information carried in the received consumer load/store request, and execute the consumer load/store request.
    Type: Grant
    Filed: June 12, 2019
    Date of Patent: August 17, 2021
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Lei Fang, Xi Chen, Weiguang Cai
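The dependency-forwarding flow described above can be sketched as a toy model: the memory controller keeps the producer's result locally and completes the dependent consumer request without a round trip to the core. All class, method, and request names here are illustrative, not taken from the patent.

```python
class ToyMemoryController:
    """Toy memory controller that resolves a consumer load/store locally
    once the producer it depends on has completed (illustrative only)."""

    def __init__(self):
        self.completed = {}  # producer request id -> result value

    def execute_producer(self, req_id, value):
        # The producer load/store completes; its result stays in the controller.
        self.completed[req_id] = value
        return value

    def execute_consumer(self, req_id, depends_on):
        # The consumer request carries dependency info, so the controller can
        # terminate the dependency locally instead of stalling at the core.
        if depends_on in self.completed:
            return self.completed[depends_on]
        raise RuntimeError("producer not yet complete")
```

For example, once a hypothetical `load_A` completes with value 42, a dependent `store_B` can be executed immediately by the controller using only the dependency tag it carries.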
  • Patent number: 10795826
    Abstract: A translation lookaside buffer (TLB) management method and a multi-core processor are provided. The method includes: receiving, by a first core, a first address translation request; querying a TLB of the first core based on the first address translation request; determining that a first target TLB entry corresponding to the first address translation request is missing in the TLB of the first core, and obtaining the first target TLB entry; determining that entry storage in the TLB of the first core is full; determining a second core from cores in an idle state in the multi-core processor; replacing a first entry in the TLB of the first core with the first target TLB entry; and storing the first entry in a TLB of the second core. Accordingly, a TLB miss rate is reduced and program execution is accelerated.
    Type: Grant
    Filed: November 2, 2018
    Date of Patent: October 6, 2020
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Lei Fang, Weiguang Cai, Xiongli Gu
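The eviction-to-idle-core scheme can be illustrated with a small Python sketch: on a miss with a full TLB, the victim entry is parked in an idle core's TLB instead of being discarded. The FIFO replacement policy, the dict-based page table, and all names are assumptions for illustration, not details from the patent.

```python
from collections import OrderedDict

class CoreTLB:
    """Toy per-core TLB with a fixed capacity (illustrative sketch only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # virtual page -> physical page

    def full(self):
        return len(self.entries) >= self.capacity

def translate(first, idle_cores, vpage, page_table):
    # Hit in the first core's own TLB.
    if vpage in first.entries:
        return first.entries[vpage]
    # Miss: fetch the target entry (page table modeled as a dict).
    ppage = page_table[vpage]
    if first.full():
        # Evict the oldest entry, but park it in an idle core's TLB
        # rather than discarding it.
        victim_v, victim_p = first.entries.popitem(last=False)
        idle_cores[0].entries[victim_v] = victim_p
    first.entries[vpage] = ppage
    return ppage
```

A fuller model would also probe the idle cores' TLBs on a miss, which is where the reduced miss rate comes from; the sketch shows only the placement side.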
  • Patent number: 10740247
    Abstract: A method for accessing an entry in a translation lookaside buffer (TLB) and a processing chip are provided. In the method, the entry includes at least one combination entry, and the combination entry includes a virtual huge page number, a bit vector field, and a physical huge page number. The physical huge page number is an identifier of N consecutive physical pages corresponding to N consecutive virtual pages. One entry is used to represent a plurality of virtual-to-physical page mappings, so that when a page table length is fixed, a quantity of entries in the TLB can be increased exponentially, thereby increasing a TLB hit probability and reducing TLB misses. In this way, a delay in program processing can be reduced, and processing efficiency of the processing chip can be improved.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: August 11, 2020
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Weiguang Cai, Xiongli Gu, Lei Fang
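A minimal sketch of such a combination entry, assuming N-page-aligned ranges and a simple integer bit vector marking which of the N slots hold a valid mapping (the class and field names are illustrative):

```python
class CombinedTLBEntry:
    """One TLB entry mapping up to N consecutive virtual pages to N
    consecutive physical pages; the bit vector marks valid slots."""

    def __init__(self, virt_huge_page, phys_huge_page, n, bit_vector=0):
        self.virt_huge_page = virt_huge_page
        self.phys_huge_page = phys_huge_page
        self.n = n
        self.bit_vector = bit_vector

    def insert(self, offset):
        # Mark the mapping at this offset within the huge-page range valid.
        self.bit_vector |= 1 << offset

    def lookup(self, vpage):
        # Split the virtual page into (huge page number, offset within range).
        huge, offset = divmod(vpage, self.n)
        if huge == self.virt_huge_page and self.bit_vector & (1 << offset):
            return self.phys_huge_page * self.n + offset
        return None
```

One entry thus covers up to N mappings, which is how the effective TLB reach grows without adding entries.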
  • Publication number: 20190370187
    Abstract: A cache replacement method implemented in a computer including a high-level cache and a low-level cache, where the low-level cache and the high-level cache are in an inclusion relationship. The method includes selecting a first cache line in the low-level cache as a to-be-replaced cache line and monitoring access to the high-level cache to determine whether a hit on a corresponding cache line of the first cache line occurs. If the hit occurs in the high-level cache before a cache miss occurs in the low-level cache, the first cache line is retained in the low-level cache and a second cache line is selected as the to-be-replaced cache line.
    Type: Application
    Filed: August 19, 2019
    Publication date: December 5, 2019
    Inventors: Jiyang Yu, Lei Fang, Weiguang Cai
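The retain-on-upper-hit policy can be modeled in a few lines: a tentative victim is kept if the higher-level cache hits it before the next low-level miss arrives. The round-robin choice of the replacement candidate is an assumption for illustration, not part of the abstract.

```python
class InclusiveLLC:
    """Toy victim selection for an inclusive low-level cache: a tentative
    victim is retained if the high-level cache hits it first."""

    def __init__(self, lines):
        self.lines = list(lines)
        self.candidate = self.lines[0]  # tentatively chosen victim

    def on_high_level_hit(self, line):
        # The high-level cache hit the tentative victim before any low-level
        # miss, so the line is evidently live: keep it, pick another victim.
        if line == self.candidate:
            idx = self.lines.index(line)
            self.candidate = self.lines[(idx + 1) % len(self.lines)]

    def on_llc_miss(self):
        # A low-level miss arrived first: evict the current candidate.
        victim = self.candidate
        self.lines.remove(victim)
        return victim
```

This avoids the inclusion-victim problem where evicting a low-level line would force out a line the upper cache is still actively using.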
  • Publication number: 20190294442
    Abstract: A computer system and a memory access technology are provided. In the computer system, when load/store instructions having a dependency relationship are processed, dependency information between a producer load/store instruction and a consumer load/store instruction can be obtained from a processor. A consumer load/store request is sent to a memory controller in the computer system based on the obtained dependency information, so that the memory controller can locally terminate the dependency relationship between load/store requests based on the dependency information carried in the received consumer load/store request, and execute the consumer load/store request.
    Type: Application
    Filed: June 12, 2019
    Publication date: September 26, 2019
    Inventors: Lei FANG, Xi CHEN, Weiguang CAI
  • Publication number: 20190108134
    Abstract: A method for accessing an entry in a translation lookaside buffer (TLB) and a processing chip are provided. In the method, the entry includes at least one combination entry, and the combination entry includes a virtual huge page number, a bit vector field, and a physical huge page number. The physical huge page number is an identifier of N consecutive physical pages corresponding to N consecutive virtual pages. One entry is used to represent a plurality of virtual-to-physical page mappings, so that when a page table length is fixed, a quantity of entries in the TLB can be increased exponentially, thereby increasing a TLB hit probability and reducing TLB misses. In this way, a delay in program processing can be reduced, and processing efficiency of the processing chip can be improved.
    Type: Application
    Filed: December 5, 2018
    Publication date: April 11, 2019
    Inventors: Weiguang CAI, Xiongli GU, Lei FANG
  • Publication number: 20190073315
    Abstract: A translation lookaside buffer (TLB) management method and a multi-core processor are provided. The method includes: receiving, by a first core, a first address translation request; querying a TLB of the first core based on the first address translation request; determining that a first target TLB entry corresponding to the first address translation request is missing in the TLB of the first core, and obtaining the first target TLB entry; determining that entry storage in the TLB of the first core is full; determining a second core from cores in an idle state in the multi-core processor; replacing a first entry in the TLB of the first core with the first target TLB entry; and storing the first entry in a TLB of the second core. Accordingly, a TLB miss rate is reduced and program execution is accelerated.
    Type: Application
    Filed: November 2, 2018
    Publication date: March 7, 2019
    Inventors: Lei FANG, Weiguang CAI, Xiongli GU
  • Publication number: 20180101475
    Abstract: Embodiments of the present disclosure disclose a method for combining entries, including: determining N to-be-combined entries, where a cache block indicated by an entry label of each entry of the N entries belongs to a combination range, and the combination range indicates 2^a cache blocks; and combining the N entries into a first entry, where an entry label of the first entry indicates the 2^a cache blocks, and a sharer number of the first entry includes a sharer number of each entry of the N entries. According to the method, entries in a directory can be combined effectively, thereby improving directory usage efficiency.
    Type: Application
    Filed: December 12, 2017
    Publication date: April 12, 2018
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Lei FANG, Xiongli GU, Weiguang CAI
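A minimal sketch of the combining step, assuming each entry is an (address, sharer-bitmask) pair and the sharer set of the combined entry is the union of the inputs (the representation and function name are illustrative):

```python
def combine_entries(entries, a):
    """Combine directory entries whose cache blocks fall within one
    2**a-block range. Each entry is (block_addr, sharers_bitmask); the
    result is one entry (range_base, sharers_bitmask) whose sharer set
    is the union of all inputs (illustrative model only)."""
    base = entries[0][0] >> a
    assert all((addr >> a) == base for addr, _ in entries), \
        "all blocks must lie in the same 2**a-block combination range"
    sharers = 0
    for _, s in entries:
        sharers |= s  # the combined entry covers every original sharer
    return (base << a, sharers)
```

Replacing N entries with one coarser entry is what frees directory capacity; the cost is that the combined entry over-approximates the sharers of any individual block.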
  • Publication number: 20170364442
    Abstract: The present disclosure provides a method for accessing a data visitor directory in a multi-core system, a directory cache device, a multi-core system, and a directory storage unit. The method includes: receiving a first access request sent by a first processor core, where the first access request is used to access an entry, corresponding to a first data block, in a directory; determining, according to the first access request, that a single-pointer entry array has a first single-pointer entry corresponding to the first data block; and when determining, according to the first single-pointer entry, that a sharing entry array has a first sharing entry associated with the first single-pointer entry, determining multiple visitors of the first data block according to the first sharing entry. According to embodiments of the present disclosure, storage resources occupied by a directory can be reduced.
    Type: Application
    Filed: August 14, 2017
    Publication date: December 21, 2017
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Xiongli GU, Lei FANG, Weiguang CAI, Peng LIU
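The two-array layout can be sketched as follows. Storing a core id inline when a block has a single visitor, and an index into a shared sharing array otherwise, is one plausible reading of the abstract; all names are illustrative.

```python
class ToyDirectory:
    """Toy two-array visitor directory: a single-pointer entry names either
    one visitor core directly or an index into a sharing-entry array."""

    def __init__(self):
        self.single = {}   # data block -> ("core", core_id) or ("ptr", index)
        self.sharing = []  # sharing entries: lists of visitor core ids

    def record(self, block, cores):
        if len(cores) == 1:
            # Common case: one visitor fits in the compact entry itself.
            self.single[block] = ("core", cores[0])
        else:
            # Multiple visitors: point at an entry in the sharing array.
            self.single[block] = ("ptr", len(self.sharing))
            self.sharing.append(list(cores))

    def visitors(self, block):
        kind, value = self.single[block]
        if kind == "core":
            return [value]
        return self.sharing[value]
```

Because most blocks have a single visitor in practice, the compact single-pointer entries dominate and the directory's storage footprint shrinks.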
  • Patent number: 9465743
    Abstract: Embodiments of the present invention disclose a method for accessing a cache and a pseudo cache agent (PCA). The method of the present invention is applied to a multiprocessor system, where the system includes at least one NC, at least one PCA conforming to a processor micro-architecture level interconnect protocol is embedded in the NC, the PCA is connected to at least one PCA storage device, and the PCA storage device stores data shared among memories in the multiprocessor system. The method of the present invention includes: if the NC receives a data request, obtaining, by the PCA, target data required in the data request from the PCA storage device connected to the PCA; and sending the target data to a sender of the data request. Embodiments of the present invention are mainly applied to a process of accessing cache data in the multiprocessor system.
    Type: Grant
    Filed: December 19, 2012
    Date of Patent: October 11, 2016
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Wei Zheng, Jiangen Liu, Gang Liu, Weiguang Cai
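The request path through the node controller (NC) can be modeled as: try the pseudo cache agent's local storage first, and fall back to the data's remote owner only on a miss. The fallback callback and all names are assumptions for illustration.

```python
class PseudoCacheAgent:
    """Toy pseudo cache agent embedded in a node controller: serves a data
    request from its own storage when the shared data is present."""

    def __init__(self, storage):
        self.storage = storage  # address -> cached shared data

    def handle(self, addr):
        return self.storage.get(addr)  # None on a PCA-storage miss

class NodeController:
    def __init__(self, pca, fetch_remote):
        self.pca = pca
        self.fetch_remote = fetch_remote  # fallback path to the remote owner

    def request(self, addr):
        data = self.pca.handle(addr)
        if data is not None:
            return data  # served locally by the PCA, no cross-node traffic
        return self.fetch_remote(addr)
```

Serving hot shared data at the NC avoids repeated cross-node round trips to the owning processor's memory.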
  • Patent number: 9197373
    Abstract: The present invention discloses a method for retransmitting a data packet in a quick path interconnect (QPI) system, and a node. When a first node serves as a sending end, only the first data packet detected to be faulty is retransmitted to a second node, thereby saving system resources that would otherwise be occupied by the data packet retransmission. When the first node serves as a receiving end, no packet loss occurs at the first node even when the second node retransmits only the second data packet detected to be faulty, thereby ensuring reliable data packet transmission over the QPI bus.
    Type: Grant
    Filed: December 16, 2013
    Date of Patent: November 24, 2015
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Jiangen Liu, Gang Liu, Weiguang Cai
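Selective retransmission can be sketched as: the receiver keeps every good packet and reports only the faulty sequence numbers, so the sender resends just those rather than everything after the first fault. The tuple format and CRC flag are illustrative, not from the patent.

```python
def deliver(packets):
    """Toy selective retransmission: keep good packets, report only the
    faulty sequence numbers for retransmission (illustrative model)."""
    received = {}
    faulty = []
    for seq, payload, crc_ok in packets:
        if crc_ok:
            received[seq] = payload   # good packets are never discarded
        else:
            faulty.append(seq)        # only these are re-requested
    return received, faulty
```

If packet 1 arrives corrupted, packets 0 and 2 stay buffered in order; a second `deliver` call carrying only the retransmitted packet 1 fills the hole, so no packet is lost at the receiver.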
  • Publication number: 20140108878
    Abstract: The present invention discloses a method for retransmitting a data packet in a quick path interconnect (QPI) system, and a node. When a first node serves as a sending end, only the first data packet detected to be faulty is retransmitted to a second node, thereby saving system resources that would otherwise be occupied by the data packet retransmission. When the first node serves as a receiving end, no packet loss occurs at the first node even when the second node retransmits only the second data packet detected to be faulty, thereby ensuring reliable data packet transmission over the QPI bus.
    Type: Application
    Filed: December 16, 2013
    Publication date: April 17, 2014
    Applicant: Huawei Technologies Co., Ltd.
    Inventors: Jiangen Liu, Gang Liu, Weiguang Cai