Patents by Inventor Guozhen Pan

Guozhen Pan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11803752
    Abstract: Implementations of the present specification provide a model-based prediction method and apparatus. The method includes: a model running environment receives an input tensor of a machine learning model; the model running environment sends a table query request to an embedding running environment, the table query request including the input tensor, to request low-dimensional conversion of the input tensor; the model running environment receives a table query result returned by the embedding running environment, the table query result being obtained by the embedding running environment by performing embedding query and processing based on the input tensor; and the model running environment inputs the table query result into the machine learning model, and runs the machine learning model to complete model-based prediction.
    Type: Grant
    Filed: February 2, 2021
    Date of Patent: October 31, 2023
    Assignee: Advanced New Technologies Co., Ltd.
    Inventors: Yongchao Liu, Sizhong Li, Guozhen Pan, Jianguo Xu, Qiyin Huang
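The split between a model running environment and an embedding running environment described in patent 11803752 can be illustrated with a short sketch. The code below is not the patented implementation; it is a minimal Python illustration with hypothetical class names (`EmbeddingEnvironment`, `ModelEnvironment`), in which the "table query" is a plain embedding-table lookup and the "model" is a single logistic unit.

```python
import numpy as np

class EmbeddingEnvironment:
    """Hypothetical embedding runtime: maps high-dimensional sparse ids
    to low-dimensional dense vectors via an embedding table."""

    def __init__(self, vocab_size: int, dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.table = rng.normal(size=(vocab_size, dim)).astype(np.float32)

    def query(self, input_tensor: np.ndarray) -> np.ndarray:
        # "Table query": gather the embedding rows for each id and
        # reduce them (here by mean) into one dense feature vector.
        return self.table[input_tensor].mean(axis=0)

class ModelEnvironment:
    """Hypothetical model runtime that delegates embedding lookups."""

    def __init__(self, embedding_env: EmbeddingEnvironment, weights: np.ndarray):
        self.embedding_env = embedding_env
        self.weights = weights  # stand-in for a trained machine learning model

    def predict(self, input_tensor: np.ndarray) -> float:
        # 1) send the table query request (carrying the input tensor)
        dense = self.embedding_env.query(input_tensor)
        # 2) feed the low-dimensional query result into the model and run it
        return float(1.0 / (1.0 + np.exp(-dense @ self.weights)))

embedding_env = EmbeddingEnvironment(vocab_size=10_000, dim=8)
model_env = ModelEnvironment(embedding_env, weights=np.ones(8, dtype=np.float32))
print(model_env.predict(np.array([3, 17, 4096])))  # sparse feature ids
```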
  • Patent number: 11361217
    Abstract: Embodiments of the present specification provide chips and chip-based data processing methods. In an embodiment, a method comprises: obtaining data associated with one or more neural networks transmitted from a server; for each layer of a neural network of the one or more neural networks, configuring, based on the data, a plurality of operator units based on a type of computation each operator unit performs; and invoking the plurality of operator units to perform computations, based on neurons of a layer of the neural network immediately above, of the data for each neuron to produce a value of the neuron.
    Type: Grant
    Filed: July 12, 2021
    Date of Patent: June 14, 2022
    Assignee: Advanced New Technologies Co., Ltd.
    Inventors: Guozhen Pan, Jianguo Xu, Yongchao Liu, Haitao Zhang, Qiyin Huang, Guanyin Zhu
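A rough software analogue of the layer-by-layer flow in patent 11361217 is sketched below. The operator-unit dictionary, the layer format, and the `run_on_chip` function are hypothetical stand-ins for the configurable hardware operator units the abstract describes; the sketch only shows how each neuron's value could be produced from the neurons of the layer above.

```python
import numpy as np

# Hypothetical "operator units": one callable per type of computation.
OPERATOR_UNITS = {
    "mac": lambda inputs, weights, bias: float(np.dot(inputs, weights) + bias),
    "relu": lambda x: max(0.0, x),
}

def run_on_chip(layers, input_data):
    """Sketch of layer-by-layer evaluation: each neuron's value is computed
    by operator units from the neurons of the layer immediately above."""
    activations = np.asarray(input_data, dtype=np.float32)
    for layer in layers:
        # configure the operator units for this layer from the transmitted data
        mac = OPERATOR_UNITS[layer["compute_op"]]
        act = OPERATOR_UNITS[layer["activation_op"]]
        next_values = []
        for weights, bias in zip(layer["weights"], layer["biases"]):
            next_values.append(act(mac(activations, weights, bias)))
        activations = np.asarray(next_values, dtype=np.float32)
    return activations

layers = [
    {"compute_op": "mac", "activation_op": "relu",
     "weights": np.array([[0.5, -0.2], [0.1, 0.3]]), "biases": [0.0, 0.1]},
]
print(run_on_chip(layers, [1.0, 2.0]))
```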
  • Patent number: 11327756
    Abstract: A first logic circuit included in a processor receives a first digital signal, where the first logic circuit includes a special purpose register, a comparator, and an adder, where the special purpose register stores a first resource balance for executing a smart contract, where the first digital signal includes a resource deduction quota corresponding to a code set in the smart contract. The first logic circuit reads the first resource balance from the special purpose register. The first logic circuit compares, using the comparator, the first resource balance with the resource deduction quota. In response to the first resource balance being greater than or equal to the resource deduction quota, the first logic circuit subtracts, using the adder, the resource deduction quota from the first resource balance to obtain a second resource balance. The first logic circuit stores the second resource balance in the special purpose register.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: May 10, 2022
    Assignee: Alipay (Hangzhou) Information Technology Co., Ltd.
    Inventors: Xuepeng Guo, Kuan Zhao, Ren Guo, Yubo Guo, Haiyuan Gao, Qibin Ren, Zucheng Huang, Lei Zhang, Guozhen Pan, Changzheng Wei, Zhijian Chen, Ying Yan
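The register/comparator/adder flow in patent 11327756 maps naturally onto a small state machine. The Python class below is a behavioral model only, with hypothetical names; it mirrors the read-compare-subtract-store sequence of the abstract rather than the actual logic circuit.

```python
class ResourceDeductionCircuit:
    """Software model of a hypothetical logic circuit: a special purpose
    register holding the resource balance, plus comparator and adder steps."""

    def __init__(self, initial_balance: int):
        self._special_purpose_register = initial_balance  # first resource balance

    def on_signal(self, deduction_quota: int) -> bool:
        """Handle a 'digital signal' carrying the resource deduction quota
        for a code set in the smart contract. Returns True if deducted."""
        balance = self._special_purpose_register                        # read register
        if balance >= deduction_quota:                                   # comparator
            self._special_purpose_register = balance - deduction_quota   # adder (subtraction)
            return True
        return False                                                     # insufficient resources

circuit = ResourceDeductionCircuit(initial_balance=100)
print(circuit.on_signal(30), circuit._special_purpose_register)   # True 70
print(circuit.on_signal(200), circuit._special_purpose_register)  # False 70
```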
  • Publication number: 20210342680
    Abstract: Embodiments of the present specification provide chips and chip-based data processing methods. In an embodiment, a method comprises: obtaining data associated with one or more neural networks transmitted from a server; for each layer of a neural network of the one or more neural networks, configuring, based on the data, a plurality of operator units based on a type of computation each operator unit performs; and invoking the plurality of operator units to perform computations, based on neurons of a layer of the neural network immediately above, of the data for each neuron to produce a value of the neuron.
    Type: Application
    Filed: July 12, 2021
    Publication date: November 4, 2021
    Applicant: Advanced New Technologies Co., Ltd.
    Inventors: Guozhen Pan, Jianguo Xu, Yongchao Liu, Haitao Zhang, Qiyin Huang, Guanyin Zhu
  • Publication number: 20210326132
    Abstract: A first logic circuit included in a processor receives a first digital signal, where the first logic circuit includes a special purpose register, a comparator, and an adder, where the special purpose register stores a first resource balance for executing a smart contract, where the first digital signal includes a resource deduction quota corresponding to a code set in the smart contract. The first logic circuit reads the first resource balance from the special purpose register. The first logic circuit compares, using the comparator, the first resource balance with the resource deduction quota. In response to the first resource balance being greater than or equal to the resource deduction quota, the first logic circuit subtracts, using the adder, the resource deduction quota from the first resource balance to obtain a second resource balance. The first logic circuit stores the second resource balance in the special purpose register.
    Type: Application
    Filed: June 29, 2021
    Publication date: October 21, 2021
    Applicant: Alipay (Hangzhou) Information Technology Co., Ltd.
    Inventors: Xuepeng Guo, Kuan Zhao, Ren Guo, Yubo Guo, Haiyuan Gao, Qibin Ren, Zucheng Huang, Lei Zhang, Guozhen Pan, Changzheng Wei, Zhijian Chen, Ying Yan
  • Patent number: 11113423
    Abstract: An FPGA hardware device obtains encrypted data of each participant of a secure computing system, where the FPGA hardware device stores at least one first key, where the at least one first key is at least one first key of all participants in the secure computing system or at least one first key of a predetermined number of trusted managers in the secure computing system, where the FPGA hardware device includes an FPGA chip. The FPGA hardware device decrypts the encrypted data of each participant by using a working key of each participant, to obtain plaintext data of each participant, where the working key of each participant is obtained based on a corresponding first key of the at least one first key. The FPGA hardware device performs computing based on the plaintext data of each participant to obtain a computing result. The FPGA hardware device outputs the computing result.
    Type: Grant
    Filed: January 20, 2021
    Date of Patent: September 7, 2021
    Assignee: Advanced New Technologies Co., Ltd.
    Inventors: Guozhen Pan, Yichen Tu, Ni Zhou, Jianguo Xu, Yongchao Liu
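The device-side flow shared by patents 11113423 and 10929571 (and the related publications below) can be mocked in software. The sketch uses a toy XOR cipher and a SHA-256 based key derivation purely for illustration; `FpgaSecureCompute`, `derive_working_key`, and the summation workload are hypothetical and do not reflect the ciphers or circuits actually used on the FPGA device.

```python
import hashlib

def derive_working_key(first_key: bytes, context: bytes = b"session-1") -> bytes:
    # Working key derived from a participant's first key (sketch: SHA-256 as KDF).
    return hashlib.sha256(first_key + context).digest()

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher standing in for the device's real decryption engine.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class FpgaSecureCompute:
    """Sketch of the device-side flow: hold the provisioned first keys, decrypt
    each participant's ciphertext with a derived working key, then compute."""

    def __init__(self, first_keys: dict[str, bytes]):
        self._first_keys = first_keys  # one first key per participant (or manager)

    def compute_sum(self, ciphertexts: dict[str, bytes]) -> int:
        plaintexts = []
        for participant, ciphertext in ciphertexts.items():
            working_key = derive_working_key(self._first_keys[participant])
            plaintexts.append(int(xor_stream(ciphertext, working_key).decode()))
        return sum(plaintexts)  # computing result over the joint plaintext data

first_keys = {"A": b"key-A", "B": b"key-B"}
ciphertexts = {
    p: xor_stream(str(v).encode(), derive_working_key(first_keys[p]))
    for p, v in {"A": 40, "B": 2}.items()
}
print(FpgaSecureCompute(first_keys).compute_sum(ciphertexts))  # 42
```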
  • Patent number: 11062201
    Abstract: Embodiments of the present specification provide chips and chip-based data processing methods. In an embodiment, a method comprises: obtaining data associated with one or more neural networks transmitted from a server; for each layer of a neural network of the one or more neural networks, configuring, based on the data, a plurality of operator units based on a type of computation each operator unit performs; and invoking the plurality of operator units to perform computations, based on neurons of a layer of the neural network immediately above, of the data for each neuron to produce a value of the neuron.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: July 13, 2021
    Assignee: Advanced New Technologies Co., Ltd.
    Inventors: Guozhen Pan, Jianguo Xu, Yongchao Liu, Haitao Zhang, Qiyin Huang, Guanyin Zhu
  • Publication number: 20210158165
    Abstract: Implementations of the present specification provide a model-based prediction method and apparatus. The method includes: a model running environment receives an input tensor of a machine learning model; the model running environment sends a table query request to an embedding running environment, the table query request including the input tensor, to request low-dimensional conversion of the input tensor; the model running environment receives a table query result returned by the embedding running environment, the table query result being obtained by the embedding running environment by performing embedding query and processing based on the input tensor; and the model running environment inputs the table query result into the machine learning model, and runs the machine learning model to complete model-based prediction.
    Type: Application
    Filed: February 2, 2021
    Publication date: May 27, 2021
    Inventors: Yongchao Liu, Sizhong Li, Guozhen Pan, Jianguo Xu, Qiyin Huang
  • Publication number: 20210141941
    Abstract: An FPGA hardware device obtains encrypted data of each participant of a secure computing system, where the FPGA hardware device stores at least one first key, where the at least one first key is at least one first key of all participants in the secure computing system or at least one first key of a predetermined number of trusted managers in the secure computing system, where the FPGA hardware device includes an FPGA chip. The FPGA hardware device decrypts the encrypted data of each participant by using a working key of each participant, to obtain plaintext data of each participant, where the working key of each participant is obtained based on a corresponding first key of the at least one first key. The FPGA hardware device performs computing based on the plaintext data of each participant to obtain a computing result. The FPGA hardware device outputs the computing result.
    Type: Application
    Filed: January 20, 2021
    Publication date: May 13, 2021
    Applicant: Advanced New Technologies Co., Ltd.
    Inventors: Guozhen Pan, Yichen Tu, Ni Zhou, Jianguo Xu, Yongchao Liu
  • Patent number: 10929571
    Abstract: An FPGA hardware device obtains encrypted data of each participant of a secure computing system, where the FPGA hardware device stores at least one first key, where the at least one first key is at least one first key of all participants in the secure computing system or at least one first key of a predetermined number of trusted managers in the secure computing system, where the FPGA hardware device includes an FPGA chip. The FPGA hardware device decrypts the encrypted data of each participant by using a working key of each participant, to obtain plaintext data of each participant, where the working key of each participant is obtained based on a corresponding first key of the at least one first key. The FPGA hardware device performs computing based on the plaintext data of each participant to obtain a computing result. The FPGA hardware device outputs the computing result.
    Type: Grant
    Filed: January 14, 2020
    Date of Patent: February 23, 2021
    Assignee: Advanced New Technologies Co., Ltd.
    Inventors: Guozhen Pan, Yichen Tu, Ni Zhou, Jianguo Xu, Yongchao Liu
  • Publication number: 20210049453
    Abstract: Embodiments of the present specification provide chips and chip-based data processing methods. In an embodiment, a method comprises: obtaining data associated with one or more neural networks transmitted from a server; for each layer of a neural network of the one or more neural networks, configuring, based on the data, a plurality of operator units based on a type of computation each operator unit performs; and invoking the plurality of operator units to perform computations, based on neurons of a layer of the neural network immediately above, of the data for each neuron to produce a value of the neuron.
    Type: Application
    Filed: October 30, 2020
    Publication date: February 18, 2021
    Applicant: Advanced New Technologies Co., Ltd.
    Inventors: Guozhen Pan, Jianguo Xu, Yongchao Liu, Haitao Zhang, Qiyin Huang, Guanyin Zhu
  • Publication number: 20200226296
    Abstract: An FPGA hardware device obtains encrypted data of each participant of a secure computing system, where the FPGA hardware device stores at least one first key, where the at least one first key is at least one first key of all participants in the secure computing system or at least one first key of a predetermined number of trusted managers in the secure computing system, where the FPGA hardware device includes an FPGA chip. The FPGA hardware device decrypts the encrypted data of each participant by using a working key of each participant, to obtain plaintext data of each participant, where the working key of each participant is obtained based on a corresponding first key of the at least one first key. The FPGA hardware device performs computing based on the plaintext data of each participant to obtain a computing result. The FPGA hardware device outputs the computing result.
    Type: Application
    Filed: January 14, 2020
    Publication date: July 16, 2020
    Applicant: Alibaba Group Holding Limited
    Inventors: Guozhen Pan, Yichen Tu, Ni Zhou, Jianguo Xu, Yongchao Liu
  • Patent number: 10657293
    Abstract: Disclosed herein are methods, systems, and apparatus, including computer programs encoded on computer storage media, for configuring a field programmable gate array (FPGA) based trusted execution environment (TEE) for use in a blockchain network. One of the methods includes storing a device identifier (ID), a first random number, and a first encryption key in a field programmable gate array (FPGA) device; sending an encrypted bitstream to the FPGA device, wherein the encrypted bitstream can be decrypted by the first key into a decrypted bitstream comprising a second random number; receiving an encrypted message from the FPGA device; decrypting the encrypted message from the FPGA device using a third key to produce a decrypted message; in response to decrypting the encrypted message: determining a third random number in the decrypted message; encrypting keys using the third random number; and sending the keys to the FPGA device.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: May 19, 2020
    Assignee: Alibaba Group Holding Limited
    Inventors: Changzheng Wei, Guozhen Pan, Ying Yan, Huabing Du, Boran Zhao, Xuyang Song, Yichen Tu, Ni Zhou, Jianguo Xu
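The provisioning handshake outlined in patent 10657293 can be paraphrased as a short protocol sketch. Everything below is hypothetical: `MockFpgaDevice` stands in for the real device, and the XOR-based cipher is a placeholder for whatever encryption the bitstream and messages actually use. Only the ordering of the steps (encrypted bitstream, encrypted message, third random number, key delivery) follows the abstract.

```python
import hashlib, os

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher standing in for the real bitstream/message encryption.
    key = hashlib.sha256(key).digest()
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class MockFpgaDevice:
    """Stand-in for the FPGA device: holds a device ID, a first random
    number, and the first encryption key stored at provisioning time."""

    def __init__(self, device_id: bytes, first_random: bytes, first_key: bytes):
        self.device_id, self.first_random, self.first_key = device_id, first_random, first_key
        self.third_random = os.urandom(16)
        self.keys = None

    def load_bitstream(self, encrypted_bitstream: bytes) -> None:
        # Decrypt the bitstream with the first key; it carries the second random number.
        self.second_random = xor_stream(encrypted_bitstream, self.first_key)[-16:]

    def read_message(self, third_key: bytes) -> bytes:
        # Return a message (containing the third random number) encrypted for the deployer.
        return xor_stream(self.device_id + self.third_random, third_key)

    def write_keys(self, encrypted_keys: bytes) -> None:
        self.keys = xor_stream(encrypted_keys, self.third_random)

def provision(device: MockFpgaDevice, first_key: bytes, third_key: bytes, keys: bytes) -> None:
    """Deployer-side steps from the abstract, in order."""
    device.load_bitstream(xor_stream(b"TEE-BITSTREAM|" + os.urandom(16), first_key))
    message = xor_stream(device.read_message(third_key), third_key)  # decrypt with the third key
    third_random = message[-16:]                                     # third random number
    device.write_keys(xor_stream(keys, third_random))                # keys encrypted under it

fpga = MockFpgaDevice(b"dev-0001", os.urandom(16), first_key=b"burned-in-key")
provision(fpga, first_key=b"burned-in-key", third_key=b"deployer-key", keys=b"node-signing-keys")
print(fpga.keys)  # b'node-signing-keys'
```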
  • Publication number: 20200134400
    Abstract: A computer-implemented method includes obtaining a trained convolutional neural network comprising one or more convolutional layers, each of the one or more convolutional layers comprising a plurality of filters with known filter parameters; pre-computing a reusable factor for each of the one or more convolutional layers based on the known filter parameters of the trained convolutional neural network; receiving input data to the trained convolutional neural network; computing an output of the each of the one or more convolutional layers using a Winograd convolutional operator based on the pre-computed reusable factor and the input data; and determining output data of the trained convolutional network based on the output of the each of the one or more convolutional layers.
    Type: Application
    Filed: April 22, 2019
    Publication date: April 30, 2020
    Applicant: Alibaba Group Holding Limited
    Inventors: Yongchao Liu, Qiyin Huang, Guozhen Pan, Sizhong Li, Jianguo Xu, Haitao Zhang, Lin Wang
  • Patent number: 10635951
    Abstract: A computer-implemented method includes obtaining a trained convolutional neural network comprising one or more convolutional layers, each of the one or more convolutional layers comprising a plurality of filters with known filter parameters; pre-computing a reusable factor for each of the one or more convolutional layers based on the known filter parameters of the trained convolutional neural network; receiving input data to the trained convolutional neural network; computing an output of the each of the one or more convolutional layers using a Winograd convolutional operator based on the pre-computed reusable factor and the input data; and determining output data of the trained convolutional network based on the output of the each of the one or more convolutional layers.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: April 28, 2020
    Assignee: Alibaba Group Holding Limited
    Inventors: Yongchao Liu, Qiyin Huang, Guozhen Pan, Sizhong Li, Jianguo Xu, Haitao Zhang, Lin Wang
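Publication 20200134400 and patent 10635951 above describe pre-computing a per-layer reusable factor for Winograd convolution from the known filter parameters. The sketch below illustrates the idea using the standard 1-D F(2,3) Winograd transform, which is an assumption made for brevity; the patented method addresses convolutional layers generally, and the function names here are hypothetical.

```python
import numpy as np

# Winograd F(2,3) transform matrices; the 1-D case is used for clarity.
G = np.array([[1, 0, 0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0, 0, 1]])
B_T = np.array([[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]])
A_T = np.array([[1, 1, 1, 0], [0, 1, -1, -1]])

def precompute_reusable_factor(filter_3tap: np.ndarray) -> np.ndarray:
    # Done once per layer, since the trained filter parameters are known in advance.
    return G @ filter_3tap

def winograd_conv(signal: np.ndarray, reusable_factor: np.ndarray) -> np.ndarray:
    # Slide over the input in tiles of 4 samples, producing 2 outputs per tile.
    outputs = []
    for start in range(0, len(signal) - 3, 2):
        tile = signal[start:start + 4]
        outputs.extend(A_T @ (reusable_factor * (B_T @ tile)))
    return np.array(outputs)

filt = np.array([1.0, 2.0, 1.0])
u = precompute_reusable_factor(filt)            # reusable across all inputs to the layer
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
print(winograd_conv(x, u))                      # matches direct sliding-window correlation
```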