Patents by Inventor Guozhen Pan
Guozhen Pan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11803752
Abstract: Implementations of the present specification provide a model-based prediction method and apparatus. The method includes: a model running environment receives an input tensor of a machine learning model; the model running environment sends a table query request to an embedding running environment, the table query request including the input tensor, to request low-dimensional conversion of the input tensor; the model running environment receives a table query result returned by the embedding running environment, the table query result being obtained by the embedding running environment by performing embedding query and processing based on the input tensor; and the model running environment inputs the table query result into the machine learning model, and runs the machine learning model to complete model-based prediction.
Type: Grant
Filed: February 2, 2021
Date of Patent: October 31, 2023
Assignee: Advanced New Technologies Co., Ltd.
Inventors: Yongchao Liu, Sizhong Li, Guozhen Pan, Jianguo Xu, Qiyin Huang
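The dataflow this abstract describes can be sketched as follows. This is a minimal illustrative model of the split between the two environments only, not the patented implementation; all class and function names here are my own, and the toy "model" and table are placeholders.

```python
# Sketch of the abstract's split: the model environment never touches the
# embedding table directly. It issues a "table query request" carrying the
# input tensor, receives the low-dimensional result, and only then runs the
# model. All names are illustrative, not from the patent.

class EmbeddingEnvironment:
    """Holds the (potentially large) embedding table and answers table queries."""
    def __init__(self, table):
        self.table = table  # id -> low-dimensional vector

    def query(self, input_tensor):
        # Low-dimensional conversion: map each id to its embedding row.
        return [self.table[i] for i in input_tensor]

class ModelEnvironment:
    """Runs the ML model; delegates embedding lookup to the embedding environment."""
    def __init__(self, embedding_env, model):
        self.embedding_env = embedding_env
        self.model = model

    def predict(self, input_tensor):
        query_result = self.embedding_env.query(input_tensor)  # table query request
        return self.model(query_result)                        # run the model

# Toy model: sum all embedding components.
table = {0: [0.1, 0.2], 1: [0.3, 0.4]}
env = ModelEnvironment(EmbeddingEnvironment(table),
                       lambda rows: sum(sum(r) for r in rows))
print(env.predict([0, 1]))  # prediction computed from the embedded inputs
```

The point of the separation is that the memory-heavy table lookup and the compute-heavy model execution can live in different runtimes.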
-
Patent number: 11361217
Abstract: Embodiments of the present specification provide chips and chip-based data processing methods. In an embodiment, a method comprises: obtaining data associated with one or more neural networks transmitted from a server; for each layer of a neural network of the one or more neural networks, configuring, based on the data, a plurality of operator units based on a type of computation each operator unit performs; and invoking the plurality of operator units to perform computations, based on neurons of a layer of the neural network immediately above, of the data for each neuron to produce a value of the neuron.
Type: Grant
Filed: July 12, 2021
Date of Patent: June 14, 2022
Assignee: Advanced New Technologies Co., Ltd.
Inventors: Guozhen Pan, Jianguo Xu, Yongchao Liu, Haitao Zhang, Qiyin Huang, Guanyin Zhu
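A rough software analogue of the per-layer scheme in this abstract: operator units are configured by computation type, then invoked to produce each neuron's value from the outputs of the layer above. This is a sketch under my own naming, not the chip design itself.

```python
# Illustrative sketch only (names are mine, not the patent's): each
# "operator unit" is configured with a computation type plus parameters,
# then invoked against the previous layer's neuron values.

def make_operator(kind, weights, bias):
    # One operator unit: multiply-accumulate, optionally followed by ReLU.
    def unit(inputs):
        acc = sum(w * x for w, x in zip(weights, inputs)) + bias
        return max(0.0, acc) if kind == "relu" else acc
    return unit

def run_layer(operator_units, inputs_from_layer_above):
    # Invoke every configured operator unit to produce this layer's neurons.
    return [unit(inputs_from_layer_above) for unit in operator_units]

layer = [make_operator("relu", [1.0, -1.0], 0.0),
         make_operator("linear", [0.5, 0.5], 1.0)]
print(run_layer(layer, [2.0, 1.0]))  # -> [1.0, 2.5]
```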
-
Patent number: 11327756
Abstract: A first logic circuit included in a processor receives a first digital signal, where the first logic circuit includes a special purpose register, a comparator, and an adder, where the special purpose register stores a first resource balance for executing a smart contract, where the first digital signal includes a resource deduction quota corresponding to a code set in the smart contract. The first logic circuit reads the first resource balance from the special purpose register. The first logic circuit compares, using the comparator, the first resource balance with the resource deduction quota. In response to the first resource balance being greater than or equal to the resource deduction quota, the first logic circuit subtracts, using the adder, the resource deduction quota from the first resource balance to obtain a second resource balance. The first logic circuit stores the second resource balance in the special purpose register.
Type: Grant
Filed: June 29, 2021
Date of Patent: May 10, 2022
Assignee: Alipay (Hangzhou) Information Technology Co., Ltd.
Inventors: Xuepeng Guo, Kuan Zhao, Ren Guo, Yubo Guo, Haiyuan Gao, Qibin Ren, Zucheng Huang, Lei Zhang, Guozhen Pan, Changzheng Wei, Zhijian Chen, Ying Yan
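The read-compare-subtract-write-back cycle in this abstract maps directly onto a few lines of code. The sketch below models the circuit's behavior in software (register read, comparator, adder), with illustrative names of my own; the patent itself claims the hardware logic, not this program.

```python
# Behavioral model of the described circuit: a stored resource balance is
# compared against a deduction quota; only if the balance is sufficient is
# the quota subtracted and the new balance written back to the register.
# Names are illustrative.

class ResourceCircuit:
    def __init__(self, initial_balance):
        self.register = initial_balance      # special purpose register

    def deduct(self, quota):
        balance = self.register              # read the register
        if balance >= quota:                 # comparator
            self.register = balance - quota  # adder (subtraction), write back
            return True
        return False                         # insufficient balance: no change

c = ResourceCircuit(100)
print(c.deduct(30), c.register)  # True 70
print(c.deduct(80), c.register)  # False 70
```

The design choice worth noting is that the write-back happens only on the sufficient-balance branch, so a failed deduction leaves the register untouched.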
-
Publication number: 20210342680
Abstract: Embodiments of the present specification provide chips and chip-based data processing methods. In an embodiment, a method comprises: obtaining data associated with one or more neural networks transmitted from a server; for each layer of a neural network of the one or more neural networks, configuring, based on the data, a plurality of operator units based on a type of computation each operator unit performs; and invoking the plurality of operator units to perform computations, based on neurons of a layer of the neural network immediately above, of the data for each neuron to produce a value of the neuron.
Type: Application
Filed: July 12, 2021
Publication date: November 4, 2021
Applicant: Advanced New Technologies Co., Ltd.
Inventors: Guozhen Pan, Jianguo Xu, Yongchao Liu, Haitao Zhang, Qiyin Huang, Guanyin Zhu
-
Publication number: 20210326132
Abstract: A first logic circuit included in a processor receives a first digital signal, where the first logic circuit includes a special purpose register, a comparator, and an adder, where the special purpose register stores a first resource balance for executing a smart contract, where the first digital signal includes a resource deduction quota corresponding to a code set in the smart contract. The first logic circuit reads the first resource balance from the special purpose register. The first logic circuit compares, using the comparator, the first resource balance with the resource deduction quota. In response to the first resource balance being greater than or equal to the resource deduction quota, the first logic circuit subtracts, using the adder, the resource deduction quota from the first resource balance to obtain a second resource balance. The first logic circuit stores the second resource balance in the special purpose register.
Type: Application
Filed: June 29, 2021
Publication date: October 21, 2021
Applicant: Alipay (Hangzhou) Information Technology Co., Ltd.
Inventors: Xuepeng Guo, Kuan Zhao, Ren Guo, Yubo Guo, Haiyuan Gao, Qibin Ren, Zucheng Huang, Lei Zhang, Guozhen Pan, Changzheng Wei, Zhijian Chen, Ying Yan
-
Patent number: 11113423
Abstract: An FPGA hardware device obtains encrypted data of each participant of a secure computing system, where the FPGA hardware device stores at least one first key, where the at least one first key is at least one first key of all participants in the secure computing system or at least one first key of a predetermined number of trusted managers in the secure computing system, where the FPGA hardware device includes an FPGA chip. The FPGA hardware device decrypts the encrypted data of each participant by using a working key of each participant, to obtain plaintext data of each participant, where the working key of each participant is obtained based on a corresponding first key of the at least one first key. The FPGA hardware device performs computing based on the plaintext data of each participant to obtain a computing result. The FPGA hardware device outputs the computing result.
Type: Grant
Filed: January 20, 2021
Date of Patent: September 7, 2021
Assignee: Advanced New Technologies Co., Ltd.
Inventors: Guozhen Pan, Yichen Tu, Ni Zhou, Jianguo Xu, Yongchao Liu
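The decrypt-compute-output pipeline in this abstract can be shown as a toy dataflow. A byte-wise XOR stands in for the real ciphers (the patent does not specify one here), the key-derivation step is a placeholder since the abstract only says the working key is "obtained based on" the first key, and every name is illustrative.

```python
# Toy model of the dataflow only: derive each participant's working key
# from its first key, decrypt that participant's ciphertext, compute over
# all plaintexts inside the device, and output a single result.
# XOR is a stand-in cipher; all names and the derivation are illustrative.

def derive_working_key(first_key):
    # Placeholder derivation from the participant's first key.
    return first_key ^ 0xA5

def xor_crypt(data, key):
    # XOR is its own inverse, so this both encrypts and decrypts.
    return bytes(b ^ key for b in data)

def secure_compute(encrypted_inputs):
    plaintexts = []
    for first_key, ciphertext in encrypted_inputs:
        key = derive_working_key(first_key)
        plaintexts.append(xor_crypt(ciphertext, key))
    # "Computing": here, the total byte sum across all participants.
    return sum(sum(p) for p in plaintexts)

# Two participants, each with a first key and an XOR-encrypted payload.
k1, k2 = 0x11, 0x22
enc1 = xor_crypt(bytes([1, 2, 3]), derive_working_key(k1))
enc2 = xor_crypt(bytes([4, 5]), derive_working_key(k2))
print(secure_compute([(k1, enc1), (k2, enc2)]))  # -> 15
```

Only the final result leaves the device; the plaintexts exist solely inside `secure_compute`, which is the property the FPGA boundary is meant to enforce in hardware.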
-
Patent number: 11062201
Abstract: Embodiments of the present specification provide chips and chip-based data processing methods. In an embodiment, a method comprises: obtaining data associated with one or more neural networks transmitted from a server; for each layer of a neural network of the one or more neural networks, configuring, based on the data, a plurality of operator units based on a type of computation each operator unit performs; and invoking the plurality of operator units to perform computations, based on neurons of a layer of the neural network immediately above, of the data for each neuron to produce a value of the neuron.
Type: Grant
Filed: October 30, 2020
Date of Patent: July 13, 2021
Assignee: Advanced New Technologies Co., Ltd.
Inventors: Guozhen Pan, Jianguo Xu, Yongchao Liu, Haitao Zhang, Qiyin Huang, Guanyin Zhu
-
Publication number: 20210158165
Abstract: Implementations of the present specification provide a model-based prediction method and apparatus. The method includes: a model running environment receives an input tensor of a machine learning model; the model running environment sends a table query request to an embedding running environment, the table query request including the input tensor, to request low-dimensional conversion of the input tensor; the model running environment receives a table query result returned by the embedding running environment, the table query result being obtained by the embedding running environment by performing embedding query and processing based on the input tensor; and the model running environment inputs the table query result into the machine learning model, and runs the machine learning model to complete model-based prediction.
Type: Application
Filed: February 2, 2021
Publication date: May 27, 2021
Inventors: Yongchao Liu, Sizhong Li, Guozhen Pan, Jianguo Xu, Qiyin Huang
-
Publication number: 20210141941
Abstract: An FPGA hardware device obtains encrypted data of each participant of a secure computing system, where the FPGA hardware device stores at least one first key, where the at least one first key is at least one first key of all participants in the secure computing system or at least one first key of a predetermined number of trusted managers in the secure computing system, where the FPGA hardware device includes an FPGA chip. The FPGA hardware device decrypts the encrypted data of each participant by using a working key of each participant, to obtain plaintext data of each participant, where the working key of each participant is obtained based on a corresponding first key of the at least one first key. The FPGA hardware device performs computing based on the plaintext data of each participant to obtain a computing result. The FPGA hardware device outputs the computing result.
Type: Application
Filed: January 20, 2021
Publication date: May 13, 2021
Applicant: Advanced New Technologies Co., Ltd.
Inventors: Guozhen Pan, Yichen Tu, Ni Zhou, Jianguo Xu, Yongchao Liu
-
Patent number: 10929571
Abstract: An FPGA hardware device obtains encrypted data of each participant of a secure computing system, where the FPGA hardware device stores at least one first key, where the at least one first key is at least one first key of all participants in the secure computing system or at least one first key of a predetermined number of trusted managers in the secure computing system, where the FPGA hardware device includes an FPGA chip. The FPGA hardware device decrypts the encrypted data of each participant by using a working key of each participant, to obtain plaintext data of each participant, where the working key of each participant is obtained based on a corresponding first key of the at least one first key. The FPGA hardware device performs computing based on the plaintext data of each participant to obtain a computing result. The FPGA hardware device outputs the computing result.
Type: Grant
Filed: January 14, 2020
Date of Patent: February 23, 2021
Assignee: Advanced New Technologies Co., Ltd.
Inventors: Guozhen Pan, Yichen Tu, Ni Zhou, Jianguo Xu, Yongchao Liu
-
Publication number: 20210049453
Abstract: Embodiments of the present specification provide chips and chip-based data processing methods. In an embodiment, a method comprises: obtaining data associated with one or more neural networks transmitted from a server; for each layer of a neural network of the one or more neural networks, configuring, based on the data, a plurality of operator units based on a type of computation each operator unit performs; and invoking the plurality of operator units to perform computations, based on neurons of a layer of the neural network immediately above, of the data for each neuron to produce a value of the neuron.
Type: Application
Filed: October 30, 2020
Publication date: February 18, 2021
Applicant: Advanced New Technologies Co., Ltd.
Inventors: Guozhen Pan, Jianguo Xu, Yongchao Liu, Haitao Zhang, Qiyin Huang, Guanyin Zhu
-
Publication number: 20200226296
Abstract: An FPGA hardware device obtains encrypted data of each participant of a secure computing system, where the FPGA hardware device stores at least one first key, where the at least one first key is at least one first key of all participants in the secure computing system or at least one first key of a predetermined number of trusted managers in the secure computing system, where the FPGA hardware device includes an FPGA chip. The FPGA hardware device decrypts the encrypted data of each participant by using a working key of each participant, to obtain plaintext data of each participant, where the working key of each participant is obtained based on a corresponding first key of the at least one first key. The FPGA hardware device performs computing based on the plaintext data of each participant to obtain a computing result. The FPGA hardware device outputs the computing result.
Type: Application
Filed: January 14, 2020
Publication date: July 16, 2020
Applicant: Alibaba Group Holding Limited
Inventors: Guozhen Pan, Yichen Tu, Ni Zhou, Jianguo Xu, Yongchao Liu
-
Patent number: 10657293
Abstract: Disclosed herein are methods, systems, and apparatus, including computer programs encoded on computer storage media, for configuring a field programmable gate array (FPGA) based trusted execution environment (TEE) for use in a blockchain network. One of the methods includes storing a device identifier (ID), a first random number, and a first encryption key in a field programmable gate array (FPGA) device; sending an encrypted bitstream to the FPGA device, wherein the encrypted bitstream can be decrypted by the first key into a decrypted bitstream comprising a second random number; receiving an encrypted message from the FPGA device; decrypting the encrypted message from the FPGA device using a third key to produce a decrypted message; in response to decrypting the encrypted message: determining a third random number in the decrypted message; encrypting keys using the third random number; and sending the keys to the FPGA device.
Type: Grant
Filed: September 30, 2019
Date of Patent: May 19, 2020
Assignee: Alibaba Group Holding Limited
Inventors: Changzheng Wei, Guozhen Pan, Ying Yan, Huabing Du, Boran Zhao, Xuyang Song, Yichen Tu, Ni Zhou, Jianguo Xu
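The host-side message flow in this abstract (send encrypted bitstream, receive the device's encrypted reply, recover a random number from it, use that number to encrypt the keys being provisioned) can be traced in a toy form. XOR again stands in for the real ciphers, the device side is simulated inline, and every name and constant is illustrative rather than taken from the patent.

```python
# Toy trace of the provisioning message flow only. XOR is a stand-in
# cipher; the device's behavior is simulated inline; all names and
# numeric values are illustrative, not the patented protocol.

def xor_crypt(data, key):
    # XOR is its own inverse: the same call encrypts and decrypts.
    return bytes(b ^ key for b in data)

def provision(first_key, third_key, keys_to_send):
    # Host: send an encrypted bitstream carrying a second random number,
    # decryptable by the first key stored in the device.
    second_rand = 0x42
    bitstream = xor_crypt(bytes([second_rand]), first_key)

    # Device (simulated): decrypt the bitstream, then reply with a fresh
    # third random number, encrypted so the host's third key opens it.
    assert xor_crypt(bitstream, first_key)[0] == second_rand
    third_rand = 0x17
    reply = xor_crypt(bytes([third_rand]), third_key)

    # Host: decrypt the reply, determine the third random number, and use
    # it to encrypt the keys before sending them to the device.
    recovered = xor_crypt(reply, third_key)[0]
    return [xor_crypt(k, recovered) for k in keys_to_send]

sent = provision(0x10, 0x20, [b"key1"])
```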
-
Publication number: 20200134400
Abstract: A computer-implemented method includes obtaining a trained convolutional neural network comprising one or more convolutional layers, each of the one or more convolutional layers comprising a plurality of filters with known filter parameters; pre-computing a reusable factor for each of the one or more convolutional layers based on the known filter parameters of the trained convolutional neural network; receiving input data to the trained convolutional neural network; computing an output of the each of the one or more convolutional layers using a Winograd convolutional operator based on the pre-computed reusable factor and the input data; and determining output data of the trained convolutional network based on the output of the each of the one or more convolutional layers.
Type: Application
Filed: April 22, 2019
Publication date: April 30, 2020
Applicant: Alibaba Group Holding Limited
Inventors: Yongchao Liu, Qiyin Huang, Guozhen Pan, Sizhong Li, Jianguo Xu, Haitao Zhang, Lin Wang
-
Patent number: 10635951
Abstract: A computer-implemented method includes obtaining a trained convolutional neural network comprising one or more convolutional layers, each of the one or more convolutional layers comprising a plurality of filters with known filter parameters; pre-computing a reusable factor for each of the one or more convolutional layers based on the known filter parameters of the trained convolutional neural network; receiving input data to the trained convolutional neural network; computing an output of the each of the one or more convolutional layers using a Winograd convolutional operator based on the pre-computed reusable factor and the input data; and determining output data of the trained convolutional network based on the output of the each of the one or more convolutional layers.
Type: Grant
Filed: April 22, 2019
Date of Patent: April 28, 2020
Assignee: Alibaba Group Holding Limited
Inventors: Yongchao Liu, Qiyin Huang, Guozhen Pan, Sizhong Li, Jianguo Xu, Haitao Zhang, Lin Wang
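The "pre-computed reusable factor" idea can be made concrete with the standard 1-D Winograd F(2,3) construction: because the filter is fixed after training, its transform can be computed once and reused for every input tile. The transforms below follow the textbook F(2,3) matrices, not anything specific to the patent, and the function names are my own.

```python
# Hedged sketch of the core trick using standard Winograd F(2,3):
# output = A^T [ (G g) . (B^T d) ], where g is the 3-tap filter and d is
# a 4-element input tile. Since g is known after training, U = G g is
# pre-computed once (the "reusable factor") and reused for every tile.

def precompute_reusable_factor(g):
    # U = G g, computed once per trained filter.
    g0, g1, g2 = g
    return [g0, (g0 + g1 + g2) / 2, (g0 - g1 + g2) / 2, g2]

def winograd_f23(d, U):
    # V = B^T d (input transform), m = U * V elementwise, output = A^T m.
    d0, d1, d2, d3 = d
    V = [d0 - d2, d1 + d2, d2 - d1, d1 - d3]
    m = [u * v for u, v in zip(U, V)]
    return [m[0] + m[1] + m[2], m[1] - m[2] - m[3]]

U = precompute_reusable_factor([1.0, 2.0, 3.0])   # done once, offline
print(winograd_f23([1.0, 2.0, 3.0, 4.0], U))      # -> [14.0, 20.0]
```

For comparison, direct correlation of filter [1, 2, 3] over input [1, 2, 3, 4] gives the same [14, 20], but the Winograd form needs only 4 multiplications per tile instead of 6, and the filter transform cost is amortized across all tiles.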