Patents by Inventor Ni Zhou
Ni Zhou has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11113423
Abstract: An FPGA hardware device obtains encrypted data of each participant of a secure computing system, where the FPGA hardware device stores at least one first key, where the at least one first key is at least one first key of all participants in the secure computing system or at least one first key of a predetermined number of trusted managers in the secure computing system, where the FPGA hardware device includes an FPGA chip. The FPGA hardware device decrypts the encrypted data of each participant by using a working key of each participant, to obtain plaintext data of each participant, where the working key of each participant is obtained based on a corresponding first key of the at least one first key. The FPGA hardware device performs computing based on the plaintext data of each participant to obtain a computing result. The FPGA hardware device outputs the computing result.
Type: Grant
Filed: January 20, 2021
Date of Patent: September 7, 2021
Assignee: Advanced New Technologies Co., Ltd.
Inventors: Guozhen Pan, Yichen Tu, Ni Zhou, Jianguo Xu, Yongchao Liu
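The flow this abstract describes — derive a per-participant working key from that participant's first key, decrypt every share inside the device, compute, and release only the result — can be sketched in software. This is a toy model, not the claimed FPGA implementation: the function names, the SHA-256 key derivation, and the XOR stream cipher are all illustrative stand-ins for the device's real cryptography.

```python
# Toy sketch of the patented flow: only the aggregate result leaves the
# trusted boundary; per-participant plaintext stays inside `secure_sum`.
import hashlib

def derive_working_key(first_key):
    # Working key obtained from the corresponding first key (here: a hash).
    return hashlib.sha256(b"working" + first_key).digest()

def xor_cipher(key, data):
    # Placeholder symmetric cipher; a real device would use e.g. AES.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def secure_sum(encrypted_inputs, first_keys):
    total = 0
    for participant, ciphertext in encrypted_inputs.items():
        wk = derive_working_key(first_keys[participant])
        plaintext = xor_cipher(wk, ciphertext)      # decrypt inside the device
        total += int.from_bytes(plaintext, "big")   # compute on plaintext
    return total                                    # only the result is output

# Participants encrypt locally with their own working keys before sending.
first_keys = {"alice": b"k-alice", "bob": b"k-bob"}
values = {"alice": 3, "bob": 4}
enc = {p: xor_cipher(derive_working_key(first_keys[p]), v.to_bytes(4, "big"))
       for p, v in values.items()}
print(secure_sum(enc, first_keys))  # 7
```

The XOR stream is its own inverse, so the same function serves as encrypt and decrypt in this sketch.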
-
Publication number: 20210141941
Abstract: An FPGA hardware device obtains encrypted data of each participant of a secure computing system, where the FPGA hardware device stores at least one first key, where the at least one first key is at least one first key of all participants in the secure computing system or at least one first key of a predetermined number of trusted managers in the secure computing system, where the FPGA hardware device includes an FPGA chip. The FPGA hardware device decrypts the encrypted data of each participant by using a working key of each participant, to obtain plaintext data of each participant, where the working key of each participant is obtained based on a corresponding first key of the at least one first key. The FPGA hardware device performs computing based on the plaintext data of each participant to obtain a computing result. The FPGA hardware device outputs the computing result.
Type: Application
Filed: January 20, 2021
Publication date: May 13, 2021
Applicant: Advanced New Technologies Co., Ltd.
Inventors: Guozhen Pan, Yichen Tu, Ni Zhou, Jianguo Xu, Yongchao Liu
-
Patent number: 10929571
Abstract: An FPGA hardware device obtains encrypted data of each participant of a secure computing system, where the FPGA hardware device stores at least one first key, where the at least one first key is at least one first key of all participants in the secure computing system or at least one first key of a predetermined number of trusted managers in the secure computing system, where the FPGA hardware device includes an FPGA chip. The FPGA hardware device decrypts the encrypted data of each participant by using a working key of each participant, to obtain plaintext data of each participant, where the working key of each participant is obtained based on a corresponding first key of the at least one first key. The FPGA hardware device performs computing based on the plaintext data of each participant to obtain a computing result. The FPGA hardware device outputs the computing result.
Type: Grant
Filed: January 14, 2020
Date of Patent: February 23, 2021
Assignee: Advanced New Technologies Co., Ltd.
Inventors: Guozhen Pan, Yichen Tu, Ni Zhou, Jianguo Xu, Yongchao Liu
-
Publication number: 20200226296
Abstract: An FPGA hardware device obtains encrypted data of each participant of a secure computing system, where the FPGA hardware device stores at least one first key, where the at least one first key is at least one first key of all participants in the secure computing system or at least one first key of a predetermined number of trusted managers in the secure computing system, where the FPGA hardware device includes an FPGA chip. The FPGA hardware device decrypts the encrypted data of each participant by using a working key of each participant, to obtain plaintext data of each participant, where the working key of each participant is obtained based on a corresponding first key of the at least one first key. The FPGA hardware device performs computing based on the plaintext data of each participant to obtain a computing result. The FPGA hardware device outputs the computing result.
Type: Application
Filed: January 14, 2020
Publication date: July 16, 2020
Applicant: Alibaba Group Holding Limited
Inventors: Guozhen Pan, Yichen Tu, Ni Zhou, Jianguo Xu, Yongchao Liu
-
Patent number: 10657293
Abstract: Disclosed herein are methods, systems, and apparatus, including computer programs encoded on computer storage media, for configuring a field programmable gate array (FPGA) based trusted execution environment (TEE) for use in a blockchain network. One of the methods includes storing a device identifier (ID), a first random number, and a first encryption key in a field programmable gate array (FPGA) device; sending an encrypted bitstream to the FPGA device, wherein the encrypted bitstream can be decrypted by the first key into a decrypted bitstream comprising a second random number; receiving an encrypted message from the FPGA device; decrypting the encrypted message from the FPGA device using a third key to produce a decrypted message; in response to decrypting the encrypted message: determining a third random number in the decrypted message; encrypting keys using the third random number; and sending the keys to the FPGA device.
Type: Grant
Filed: September 30, 2019
Date of Patent: May 19, 2020
Assignee: Alibaba Group Holding Limited
Inventors: Changzheng Wei, Guozhen Pan, Ying Yan, Huabing Du, Boran Zhao, Xuyang Song, Yichen Tu, Ni Zhou, Jianguo Xu
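The nonce-based handshake this abstract outlines can be modeled as a short message sequence. The sketch below is a loose simplification under stated assumptions: a toy XOR stream stands in for the device's real ciphers, and the key derivation binding the reply to the bitstream nonce is an illustrative choice, not the patented construction.

```python
# Minimal model of the provisioning handshake: a pre-installed key and
# nonce, a bitstream carrying a fresh nonce, a reply carrying the
# device's nonce, and service keys wrapped under that device nonce.
import hashlib, os

def toy_encrypt(key, data):
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

toy_decrypt = toy_encrypt  # the XOR stream is its own inverse

# Step 1: a first key k1 (and first random number r1) reside in the FPGA.
k1, r1 = os.urandom(16), os.urandom(16)

# Step 2: deployer sends a "bitstream" carrying nonce r2, encrypted under k1.
r2 = os.urandom(16)
bitstream = toy_encrypt(k1, r2)

# Step 3: device decrypts the bitstream, recovers r2, and replies with its
# own nonce r3, bound to r2 via a derived key (illustrative binding).
recovered_r2 = toy_decrypt(k1, bitstream)
r3 = os.urandom(16)
reply = toy_encrypt(hashlib.sha256(recovered_r2).digest(), r3)

# Step 4: deployer decrypts the reply, extracts r3, and wraps the
# long-term service keys under r3 before sending them to the device.
r3_at_deployer = toy_decrypt(hashlib.sha256(r2).digest(), reply)
assert r3_at_deployer == r3
service_keys = os.urandom(32)
wrapped = toy_encrypt(r3_at_deployer, service_keys)

# Only a device that knew k1 (and thus r2) can unwrap the service keys.
assert toy_decrypt(r3, wrapped) == service_keys
```

Because each step's secret is only recoverable by a party holding the previous step's key, the exchange lets the deployer confirm it is talking to the genuine device before releasing keys.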
-
Patent number: 10140251
Abstract: A processor and a method for executing a matrix multiplication operation on a processor. A specific implementation of the processor includes a data bus and an array processor having k processing units. The data bus is configured to sequentially read n columns of row vectors from an M×N multiplicand matrix and input them to each processing unit in the array processor, read an n×k submatrix from an N×K multiplier matrix and input each column vector of the submatrix to a corresponding processing unit in the array processor, and output a result obtained by each processing unit after executing a multiplication operation. Each processing unit in the array processor is configured to execute in parallel a vector multiplication operation on the input row and column vectors. Each processing unit includes a Wallace tree multiplier having n multipliers and n-1 adders. This implementation improves the processing efficiency of a matrix multiplication operation.
Type: Grant
Filed: May 9, 2017
Date of Patent: November 27, 2018
Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
Inventors: Ni Zhou, Wei Qi, Yong Wang, Jian Ouyang
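The dataflow in this abstract — an n-element row slice broadcast to k processing units, each unit multiplying it against its own column slice with n multipliers and reducing the partial products through a tree of n-1 adders — can be modeled in software. This is a pure-Python behavioral sketch, not the hardware design; the tiling loop structure is an assumption about how the bus streams slices.

```python
# Behavioral model of one processing unit: n multiplies, then a
# pairwise (tree-style) reduction using n-1 additions in total.
def processing_unit(row_slice, col_slice):
    products = [a * b for a, b in zip(row_slice, col_slice)]
    while len(products) > 1:  # adder tree: halve the operand count per level
        products = [sum(products[i:i + 2]) for i in range(0, len(products), 2)]
    return products[0]

def matmul(A, B, n=2, k=2):
    # C = A @ B, streamed in n-wide slices to k "parallel" units per tile.
    M, N, K = len(A), len(B), len(B[0])
    C = [[0] * K for _ in range(M)]
    for i in range(M):
        for j0 in range(0, K, k):          # k units cover k output columns
            for p0 in range(0, N, n):      # bus streams n-wide row slices
                row = A[i][p0:p0 + n]
                for j in range(j0, min(j0 + k, K)):  # units run concurrently in HW
                    col = [B[p][j] for p in range(p0, min(p0 + n, N))]
                    C[i][j] += processing_unit(row, col)
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```

In hardware the inner unit loop is spatial parallelism rather than a loop, which is where the claimed efficiency gain comes from; the arithmetic is identical.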
-
Publication number: 20180107630
Abstract: A processor and a method for executing a matrix multiplication operation on a processor. A specific implementation of the processor includes a data bus and an array processor having k processing units. The data bus is configured to sequentially read n columns of row vectors from an M×N multiplicand matrix and input them to each processing unit in the array processor, read an n×k submatrix from an N×K multiplier matrix and input each column vector of the submatrix to a corresponding processing unit in the array processor, and output a result obtained by each processing unit after executing a multiplication operation. Each processing unit in the array processor is configured to execute in parallel a vector multiplication operation on the input row and column vectors. Each processing unit includes a Wallace tree multiplier having n multipliers and n-1 adders. This implementation improves the processing efficiency of a matrix multiplication operation.
Type: Application
Filed: May 9, 2017
Publication date: April 19, 2018
Inventors: Ni Zhou, Wei Qi, Yong Wang, Jian Ouyang
-
Patent number: 9912349
Abstract: The present disclosure provides a method, an apparatus, and a computer readable storage medium for processing a floating point number matrix. In embodiments of the present disclosure, the minimum value and the maximum value of the floating point number model matrix are obtained from the floating point number model matrix to be compressed; compression processing is then performed on the floating point number model matrix, according to the bit width, the minimum value, and the maximum value, to obtain the fixed point number model matrix. The compression processing is performed on the floating point number model matrix of a deep learning model by a fixed point method, to obtain the fixed point number model matrix and reduce the storage space and amount of operation of the deep learning model.
Type: Grant
Filed: June 20, 2017
Date of Patent: March 6, 2018
Assignee: Beijing Baidu Netcom Science And Technology Co., Ltd.
Inventors: Jian Ouyang, Ni Zhou, Yong Wang, Wei Qi
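The min/max compression this abstract describes maps floats onto a fixed-point integer grid determined by the bit width. The sketch below uses a common affine scale/offset formulation; that particular formula is an assumption for illustration, not necessarily the patented one.

```python
# Hedged sketch of min/max fixed-point compression: floats in [mn, mx]
# are mapped onto the 2**bit_width integer levels; reconstruction error
# per element is bounded by half the quantization step.
def quantize(matrix, bit_width=8):
    flat = [x for row in matrix for x in row]
    mn, mx = min(flat), max(flat)
    levels = (1 << bit_width) - 1          # e.g. 255 for 8-bit
    scale = (mx - mn) / levels or 1.0      # guard against a constant matrix
    q = [[round((x - mn) / scale) for x in row] for row in matrix]
    return q, scale, mn

def dequantize(q, scale, mn):
    # Approximate reconstruction of the original floats.
    return [[v * scale + mn for v in row] for row in q]

m = [[-1.0, 0.0], [0.5, 1.0]]
q, scale, mn = quantize(m, bit_width=8)
approx = dequantize(q, scale, mn)
# Each entry is recovered to within scale/2 of its original value.
assert all(abs(a - b) <= scale / 2 + 1e-9
           for ra, rb in zip(m, approx)
           for a, b in zip(ra, rb))
```

Storing 8-bit integers plus one scale and one offset per matrix is what yields the claimed savings in storage and arithmetic cost versus 32-bit floats.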