Patents by Inventor Yangjie Zhou

Yangjie Zhou has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12141438
    Abstract: Zero-skipping sparsity techniques for reducing data movement between memory and accelerators and reducing the computational workload of accelerators. The techniques include detecting zero and near-zero values in memory. Only the non-zero values are transferred to the accelerator for computation, while the zero and near-zero values are written back to memory as zero values.
    Type: Grant
    Filed: February 25, 2021
    Date of Patent: November 12, 2024
    Assignee: Alibaba Group Holding Limited
    Inventors: Fei Xue, Fei Sun, Yangjie Zhou, Lide Duan, Hongzhong Zheng
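A minimal sketch of the zero-skipping idea described in patent 12141438 above: values at or below a small threshold are treated as zeros and written back to "memory" as exact zeros, while only the remaining non-zero values (with their positions) are handed to the compute step. The threshold value and the `compute_on_accelerator` stand-in are illustrative assumptions, not details from the patent.

```python
import numpy as np

NEAR_ZERO_THRESHOLD = 1e-3  # illustrative cutoff for "near-zero" values


def zero_skip(memory: np.ndarray, threshold: float = NEAR_ZERO_THRESHOLD):
    """Split a buffer into (indices, values) worth computing and zero out the rest."""
    mask = np.abs(memory) > threshold      # detect zero / near-zero entries
    indices = np.flatnonzero(mask)         # positions of values to transfer
    values = memory.flat[indices].copy()   # only these move to the accelerator
    memory[~mask] = 0.0                    # write near-zeros back as true zeros
    return indices, values


def compute_on_accelerator(values: np.ndarray) -> np.ndarray:
    """Stand-in for offloaded work; a simple scale keeps the sketch runnable."""
    return values * 2.0


if __name__ == "__main__":
    buf = np.array([0.0, 0.5, 1e-5, -2.0, 3e-4, 4.0])
    idx, vals = zero_skip(buf)
    print("transferred:", idx, compute_on_accelerator(vals))  # only |v| > threshold moves
    print("memory after write-back:", buf)                    # near-zeros are now exact zeros
```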
  • Patent number: 11804069
    Abstract: The disclosure provides an image clustering method and an image clustering apparatus. The method includes: obtaining new images and clustering them to obtain a first cluster; determining, from existing historical clusters, a historical cluster similar to the first cluster as a second cluster; obtaining a distance between the first cluster and the second cluster; and generating a target cluster by fusing the first cluster and the second cluster based on the distance. With the image clustering method and apparatus of the disclosure, secondary clustering of the existing historical clusters based on newly added images is not required; new and old clusters are fused directly.
    Type: Grant
    Filed: August 9, 2021
    Date of Patent: October 31, 2023
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Lu Gan, Yan Fu, Yangjie Zhou, Lianghui Chen, Shunnan Xu
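A hedged sketch of the incremental fusion idea in patent 11804069 above: the new images are clustered on their own, the most similar historical cluster is found, and the two are merged directly when their distance is small enough, so the historical clusters never need to be re-clustered. The centroid-distance metric and the `FUSE_DISTANCE` threshold are illustrative assumptions rather than the patented details.

```python
import numpy as np

FUSE_DISTANCE = 0.5  # illustrative fusion threshold


def centroid(cluster: np.ndarray) -> np.ndarray:
    return cluster.mean(axis=0)


def fuse_with_history(new_cluster: np.ndarray, history: list) -> list:
    """Return the updated list of clusters after trying to fuse the new cluster."""
    if not history:
        return history + [new_cluster]
    # pick the historical cluster whose centroid is closest to the new cluster's
    dists = [np.linalg.norm(centroid(new_cluster) - centroid(h)) for h in history]
    best = int(np.argmin(dists))
    if dists[best] <= FUSE_DISTANCE:
        # fuse new and old clusters directly, no re-clustering of old data
        history[best] = np.vstack([history[best], new_cluster])
    else:
        history.append(new_cluster)
    return history


if __name__ == "__main__":
    old = [np.array([[0.0, 0.0], [0.1, 0.1]]), np.array([[5.0, 5.0]])]
    new = np.array([[0.2, 0.0], [0.0, 0.3]])   # embeddings of newly added images
    merged = fuse_with_history(new, old)
    print(len(merged), "clusters after fusion")  # fused into the nearby cluster -> 2
```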
  • Publication number: 20230026824
    Abstract: A memory system for accelerating graph neural network processing can include an on-chip memory on the host to cache the data needed for processing a current root node. The system can also include a volatile memory between the host and a non-volatile memory. The volatile memory can be configured to hold one or more sets of next root nodes, their neighbor nodes, and the corresponding attributes. The non-volatile memory can have sufficient capacity to store the entire graph data and can be configured to pre-arrange the sets of next root nodes, neighbor nodes, and corresponding attributes for storage in the volatile memory.
    Type: Application
    Filed: July 15, 2022
    Publication date: January 26, 2023
    Inventors: Fei XUE, Yangjie ZHOU, Lide DUAN, Hongzhong ZHENG
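A rough sketch of the memory hierarchy described in publication 20230026824 above: the full graph lives in slow "non-volatile" storage, sets of upcoming root nodes with their neighbors and attributes are pre-arranged into faster "volatile" memory, and the data for the root node currently being processed sits on chip. The dictionary layout and class names are illustrative assumptions, not the patented design.

```python
from collections import deque


class GraphMemorySystem:
    def __init__(self, graph_nvm: dict):
        self.nvm = graph_nvm   # entire graph: node -> (neighbors, attributes)
        self.dram = deque()    # pre-arranged sets of upcoming root nodes
        self.on_chip = None    # data for the root node set being processed now

    def prefetch(self, next_roots):
        """Pre-arrange the next roots' neighbor sets and attributes into volatile memory."""
        self.dram.append({r: self.nvm[r] for r in next_roots})

    def process_next(self):
        """Move one pre-arranged set on chip and 'process' each root in it."""
        self.on_chip = self.dram.popleft()
        for root, (neighbors, attrs) in self.on_chip.items():
            # placeholder for GNN aggregation over the cached neighbor attributes
            print(f"aggregate root {root} over neighbors {neighbors} with attrs {attrs}")


if __name__ == "__main__":
    graph = {0: ([1, 2], [0.1]), 1: ([0], [0.2]), 2: ([0, 1], [0.3])}
    mem = GraphMemorySystem(graph)
    mem.prefetch([0, 1])   # next root nodes staged in volatile memory
    mem.process_next()     # consumed by on-chip processing
```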
  • Publication number: 20220343146
    Abstract: This application describes a hardware accelerator, a computer system, and a method for accelerating temporal graph neural network (GNN) computations. An exemplary hardware accelerator comprises: a key-graph memory configured to store a key graph; a nodes classification circuit configured to: fetch the key graph from the key-graph memory; receive a current graph for performing temporal GNN computation with the key graph; and identify one or more nodes of the current graph based on a comparison between the key graph and the current graph; and a nodes reconstruction circuit configured to: perform spatial computations on the one or more nodes identified by the nodes classification circuit to obtain updated nodes; generate an updated key graph based on the key graph and the updated nodes; and store the updated key graph in the key-graph memory for processing a next graph.
    Type: Application
    Filed: April 23, 2021
    Publication date: October 27, 2022
    Inventors: Fei XUE, Yangjie ZHOU, Hongzhong ZHENG
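A hedged sketch of the key-graph reuse idea in publication 20220343146 above: a stored key graph is compared against each incoming graph, only the node features that changed are recomputed, and the result becomes the key graph for the next time step. The dict-of-features representation and the `spatial_compute` stand-in are assumptions made for illustration.

```python
import numpy as np


def classify_nodes(key_graph: dict, current_graph: dict) -> list:
    """Identify nodes whose features differ from the stored key graph."""
    return [n for n, feat in current_graph.items()
            if n not in key_graph or not np.allclose(key_graph[n], feat)]


def spatial_compute(feature: np.ndarray) -> np.ndarray:
    """Stand-in for the per-node spatial GNN computation."""
    return feature * 0.5 + 1.0


def update_key_graph(key_graph: dict, current_graph: dict) -> dict:
    changed = classify_nodes(key_graph, current_graph)
    updated = dict(key_graph)                            # unchanged nodes are reused as-is
    for n in changed:
        updated[n] = spatial_compute(current_graph[n])   # recompute only changed nodes
    return updated


if __name__ == "__main__":
    key = {0: np.array([1.0]), 1: np.array([2.0])}
    cur = {0: np.array([1.0]), 1: np.array([3.0]), 2: np.array([4.0])}
    new_key = update_key_graph(key, cur)                 # nodes 1 and 2 recomputed, node 0 reused
    print({k: v.tolist() for k, v in new_key.items()})
```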
  • Publication number: 20220343145
    Abstract: This application describes a hardware accelerator, a computer system, and a method for accelerating Graph Neural Network (GNN) computations. The hardware accelerator comprises a matrix partitioning circuit configured to partition an adjacency matrix of an input graph for GNN computations into a plurality of sub-matrices; a sub-matrix reordering circuit configured to reorder rows and columns of the plurality of sub-matrices; a tile partitioning circuit configured to divide the plurality of sub-matrices with reordered rows and columns into a plurality of tiles based on processing granularities of one or more processors; and a tile distributing circuit configured to distribute the plurality of tiles to the one or more processors for performing the GNN computations.
    Type: Application
    Filed: April 21, 2021
    Publication date: October 27, 2022
    Inventors: Fei XUE, Yangjie ZHOU, Hongzhong ZHENG
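A minimal sketch of the partition / reorder / tile / distribute flow in publication 20220343145 above: the adjacency matrix is split into sub-matrices, rows and columns of each sub-matrix are reordered (here simply by non-zero count), the result is cut into fixed-size tiles, and tiles are handed out round-robin to processors. The block sizes, the degree-based ordering, and the round-robin distribution are assumptions, not the patented circuits.

```python
import numpy as np


def partition(adj: np.ndarray, block: int):
    """Split the adjacency matrix into block x block sub-matrices."""
    n = adj.shape[0]
    return [adj[i:i + block, j:j + block]
            for i in range(0, n, block) for j in range(0, n, block)]


def reorder(sub: np.ndarray) -> np.ndarray:
    """Reorder rows and columns, densest first, to group non-zeros together."""
    row_order = np.argsort(-np.count_nonzero(sub, axis=1))
    col_order = np.argsort(-np.count_nonzero(sub, axis=0))
    return sub[row_order][:, col_order]


def tile(sub: np.ndarray, granularity: int):
    """Cut a reordered sub-matrix into tiles matching a processor's granularity."""
    r, c = sub.shape
    return [sub[i:i + granularity, j:j + granularity]
            for i in range(0, r, granularity) for j in range(0, c, granularity)]


def distribute(tiles, num_processors: int):
    """Round-robin assignment of tiles to processors."""
    buckets = [[] for _ in range(num_processors)]
    for k, t in enumerate(tiles):
        buckets[k % num_processors].append(t)
    return buckets


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    adj = (rng.random((8, 8)) < 0.3).astype(np.int8)   # toy sparse adjacency matrix
    tiles = [t for sub in partition(adj, 4) for t in tile(reorder(sub), 2)]
    work = distribute(tiles, num_processors=2)
    print([len(b) for b in work])                      # tiles assigned per processor
```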
  • Publication number: 20210398026
    Abstract: A method includes: sending, by one or more computers, in response to the number of data providers for federated learning being greater than a first threshold, a data field required for the federated learning to a coordinator, the coordinator comprising a computer; receiving, by one or more computers, from the coordinator, information about one or more data providers that comprise the required data field, and determining those data providers as the remaining data providers, wherein the coordinator stores the data fields of each data provider; and performing, by one or more computers, federated learning-based modeling with each of the remaining data providers.
    Type: Application
    Filed: August 30, 2021
    Publication date: December 23, 2021
    Applicant: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Lianghui Chen, Yan Fu, Yangjie Zhou, Jun Fang
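A hedged sketch of the provider-selection step in publication 20210398026 above: when the number of candidate data providers exceeds a threshold, the required data field is sent to a coordinator, which looks the field up in its registry of each provider's fields and returns only the providers that have it; modeling then proceeds with that remaining subset. The in-process `Coordinator` class and the threshold value are illustrative, not the patented protocol.

```python
FIRST_THRESHOLD = 3  # illustrative cutoff on the number of candidate providers


class Coordinator:
    """Holds a registry mapping each data provider to the data fields it owns."""

    def __init__(self, registry: dict):
        self.registry = registry

    def providers_with_field(self, field: str) -> list:
        return [p for p, fields in self.registry.items() if field in fields]


def select_providers(candidates: list, required_field: str, coordinator: Coordinator) -> list:
    """Ask the coordinator to narrow the candidates only when there are too many."""
    if len(candidates) <= FIRST_THRESHOLD:
        return candidates
    matching = coordinator.providers_with_field(required_field)
    return [p for p in candidates if p in matching]


if __name__ == "__main__":
    coord = Coordinator({"A": {"age", "income"}, "B": {"income"},
                         "C": {"age"}, "D": {"income", "zip"}})
    remaining = select_providers(["A", "B", "C", "D"], "income", coord)
    print(remaining)   # providers with the required field: ['A', 'B', 'D']
    # federated learning-based modeling would then be performed with each remaining provider
```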
  • Publication number: 20210365713
    Abstract: The disclosure provides an image clustering method and an image clustering apparatus. The method includes: obtaining new images and clustering them to obtain a first cluster; determining, from existing historical clusters, a historical cluster similar to the first cluster as a second cluster; obtaining a distance between the first cluster and the second cluster; and generating a target cluster by fusing the first cluster and the second cluster based on the distance. With the image clustering method and apparatus of the disclosure, secondary clustering of the existing historical clusters based on newly added images is not required; new and old clusters are fused directly.
    Type: Application
    Filed: August 9, 2021
    Publication date: November 25, 2021
    Applicant: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Lu GAN, Yan FU, Yangjie ZHOU, Lianghui CHEN, Shunnan XU
  • Publication number: 20210234687
    Abstract: A method includes training, in collaboration with a plurality of collaborators, a plurality of tree models based on data of user samples shared with the plurality of collaborators; performing feature importance evaluation on the trained tree models for assigning weights to feature columns generated by respective ones of the tree models; in response to a determination that a linear model is to be trained in collaboration with a first collaborator of the plurality of collaborators, inputting data of a first user sample shared with the first collaborator into a first tree model of the plurality of tree models and one or more second tree models of the plurality of tree models to obtain a plurality of one-hot encoded feature columns; and screening the obtained feature columns based on the respective weights and training the linear model according to the screened feature columns and the data of the first user sample.
    Type: Application
    Filed: March 22, 2021
    Publication date: July 29, 2021
    Inventors: Yangjie Zhou, Lianghui Chen, Jun Fang, Yan Fu
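A rough sketch of the feature pipeline in publication 20210234687 above: several tree models turn each sample into one-hot encoded leaf-index columns, the columns from each tree carry a weight from a feature-importance evaluation, low-weight columns are screened out, and the surviving columns would then be used to fit a linear model. The hand-written "stump" trees, the weights, and the screening cutoff are illustrative stand-ins for trained tree models, not the patented scheme.

```python
import numpy as np

SCREEN_THRESHOLD = 0.2  # illustrative importance cutoff


def stump(threshold: float, feature: int):
    """A stand-in 'tree model': maps each sample to leaf 0 or 1."""
    return lambda X: (X[:, feature] > threshold).astype(int)


def one_hot(leaves: np.ndarray, n_leaves: int) -> np.ndarray:
    return np.eye(n_leaves)[leaves]


def build_features(X: np.ndarray, trees, weights) -> np.ndarray:
    """One-hot encode each tree's leaves and keep only columns above the cutoff."""
    blocks, kept = [], []
    for tree, w in zip(trees, weights):
        cols = one_hot(tree(X), n_leaves=2)
        blocks.append(cols)
        kept.extend([w >= SCREEN_THRESHOLD] * cols.shape[1])
    features = np.hstack(blocks)
    return features[:, np.array(kept)]


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.random((6, 3))
    trees = [stump(0.5, 0), stump(0.3, 1), stump(0.7, 2)]
    weights = [0.6, 0.1, 0.4]               # importance weight per tree's feature columns
    F = build_features(X, trees, weights)   # columns from the 0.1-weight tree are screened out
    print(F.shape)                          # (6, 4): two surviving trees x two leaf columns
    # a linear model (e.g. logistic regression) would then be trained on F
```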