Patents by Inventor Jian OUYANG

Jian OUYANG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10140251
    Abstract: A processor and a method for executing a matrix multiplication operation on a processor. A specific implementation of the processor includes a data bus and an array processor having k processing units. The data bus is configured to sequentially read n columns of row vectors from an M×N multiplicand matrix and input same to each processing unit in the array processor, read an n×k submatrix from an N×K multiplier matrix and input each column vector of the submatrix to a corresponding processing unit in the array processor, and output a result obtained by each processing unit after executing a multiplication operation. Each processing unit in the array processor is configured to execute in parallel a vector multiplication operation on the input row and column vectors. Each processing unit includes a Wallace tree multiplier having n multipliers and n-1 adders. This implementation improves the processing efficiency of a matrix multiplication operation.
    Type: Grant
    Filed: May 9, 2017
    Date of Patent: November 27, 2018
    Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
    Inventors: Ni Zhou, Wei Qi, Yong Wang, Jian Ouyang
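
A minimal NumPy sketch of the tiling that patent 10140251 above describes: each of the k processing units receives the same n-wide slice of a multiplicand row plus one column of an n×k sub-block of the multiplier, and computes a length-n dot product (the job of the Wallace-tree multiplier in hardware). The loop structure, the tile sizes n and k, and the name `tiled_matmul` are illustrative assumptions; on the processor the k dot products run in parallel rather than in a Python loop.

```python
import numpy as np

def tiled_matmul(A, B, n=4, k=4):
    """Emulate the described array processor: every processing unit (PU)
    sees the same n-wide row slice of A broadcast on the data bus and one
    column of an n x k sub-block of B, and accumulates a length-n dot
    product. Assumes N divisible by n and K divisible by k for brevity."""
    M, N = A.shape
    N2, K = B.shape
    assert N == N2 and N % n == 0 and K % k == 0
    C = np.zeros((M, K))
    for i in range(M):                      # one multiplicand row at a time
        for kc in range(0, K, k):           # a group of k output columns
            for nc in range(0, N, n):       # an n-wide slice of the shared dim
                row_slice = A[i, nc:nc + n]        # broadcast to every PU
                sub = B[nc:nc + n, kc:kc + k]      # n x k sub-block, one column per PU
                for pu in range(k):                # parallel in hardware
                    C[i, kc + pu] += row_slice @ sub[:, pu]
    return C

A = np.random.rand(8, 16)
B = np.random.rand(16, 8)
assert np.allclose(tiled_matmul(A, B), A @ B)
```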
  • Patent number: 10127040
    Abstract: The present application discloses a processor and a method for executing an instruction on a processor. A specific implementation of the processor includes: a host interaction device, an instruction control device, an off-chip memory, an on-chip cache and an array processing device, wherein the host interaction device is configured to exchange data and instructions with a host connected with the processor, wherein the exchanged data has a granularity of a matrix; the off-chip memory is configured to store a matrix received from the host, on which a matrix operation is to be performed; and the instruction control device is configured to convert an external instruction received from the host to a series of memory access instructions and a series of computing instructions and execute the converted instructions. The implementation can improve the execution efficiency of a deep learning algorithm.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: November 13, 2018
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Wei Qi, Jian Ouyang, Yong Wang
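
The instruction control device in patent 10127040 above converts one coarse, matrix-granularity instruction from the host into sequences of memory-access and computing instructions. A rough sketch of that decomposition follows, with entirely hypothetical micro-op names and tiling; the abstract does not specify the instruction encoding.

```python
from dataclasses import dataclass

@dataclass
class MicroOp:
    kind: str        # "load", "compute", or "store"
    operand: str     # symbolic operand name (illustrative only)

def expand_external_instruction(op, dst, srcs, tile_count=4):
    """Hypothetical expansion of a matrix-granularity host instruction into
    memory-access and computing micro-instructions, in the spirit of the
    instruction control device described above."""
    micro_ops = []
    for t in range(tile_count):
        for src in srcs:
            micro_ops.append(MicroOp("load", f"{src}[tile{t}]"))   # off-chip memory -> on-chip cache
        micro_ops.append(MicroOp("compute", f"{op} tile{t}"))      # array processing device
        micro_ops.append(MicroOp("store", f"{dst}[tile{t}]"))      # on-chip cache -> off-chip memory
    return micro_ops

for m in expand_external_instruction("matmul", "C", ["A", "B"], tile_count=2):
    print(m)
```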
  • Publication number: 20180129933
    Abstract: The present application discloses a method and apparatus for processing a data sequence. A specific implementation of the method includes: receiving an inputted to-be-processed data sequence; copying a weight matrix in a recurrent neural network model to an embedded block random access memory (RAM) of a field-programmable gate array (FPGA); processing sequentially each piece of to-be-processed data in the to-be-processed data sequence by using an activation function in the recurrent neural network model and the weight matrix stored in the embedded block RAM; and outputting a processed data sequence corresponding to the to-be-processed data sequence. This implementation improves the data sequence processing efficiency of the recurrent neural network model.
    Type: Application
    Filed: June 9, 2017
    Publication date: May 10, 2018
    Applicant: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Yong Wang, Jian Ouyang, Wei Qi, Sizhong Li
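
A software analogue of the flow in publication 20180129933 above: the recurrent weights are copied once into a local buffer (standing in for the FPGA's embedded block RAM) and then reused for every element of the data sequence. A plain tanh Elman-style recurrence is assumed purely for illustration; the publication does not fix the recurrent cell.

```python
import numpy as np

def run_sequence(sequence, W_ih, W_hh, b):
    """Copy the weight matrices once into a local buffer (mimicking the
    DRAM -> embedded block RAM transfer), then process each piece of
    to-be-processed data in order with the activation function."""
    W_ih_local, W_hh_local, b_local = W_ih.copy(), W_hh.copy(), b.copy()

    h = np.zeros(W_hh.shape[0])
    outputs = []
    for x in sequence:                      # sequential per-element processing
        h = np.tanh(W_ih_local @ x + W_hh_local @ h + b_local)
        outputs.append(h)
    return outputs

hidden, inp = 8, 4
seq = [np.random.rand(inp) for _ in range(5)]
outs = run_sequence(seq, np.random.rand(hidden, inp),
                    np.random.rand(hidden, hidden), np.random.rand(hidden))
print(len(outs), outs[0].shape)
```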
  • Publication number: 20180121789
    Abstract: The present application discloses a data processing method and apparatus. A specific implementation of the method includes: receiving floating point data sent from an electronic device; converting the received floating point data into fixed point data according to a data length and a value range of the received floating point data; performing calculation on the obtained fixed point data according to a preset algorithm to obtain result data in a fixed point form; and converting the obtained result data in the fixed point form into result data in a floating point form and sending the result data in the floating point form to the electronic device. This implementation improves the data processing efficiency.
    Type: Application
    Filed: June 9, 2017
    Publication date: May 3, 2018
    Inventors: Jian OUYANG, Wei QI, Yong WANG, Lin LIU
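
A hedged sketch of the fixed point round trip described in publication 20180121789 above: derive a scale from the value range and a chosen bit width, compute in the integer domain, and convert the result back to floating point. The symmetric scaling rule and the 16-bit width are assumptions; the abstract leaves the exact mapping open.

```python
import numpy as np

def to_fixed(x, bits=16):
    """Quantize floating point data to integers with a scale derived from
    the data's value range and the chosen bit width (assumed mapping)."""
    max_abs = np.max(np.abs(x))
    scale = (2 ** (bits - 1) - 1) / max_abs if max_abs > 0 else 1.0
    return np.round(x * scale).astype(np.int32), scale

def from_fixed(q, scale):
    return q.astype(np.float64) / scale

x = np.random.randn(4)
y = np.random.randn(4)
qx, sx = to_fixed(x)
qy, sy = to_fixed(y)
# Element-wise multiply in the fixed point domain, then convert back.
result = from_fixed(qx.astype(np.int64) * qy, sx * sy)
print(np.max(np.abs(result - x * y)))     # small quantization error
```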
  • Publication number: 20180124023
    Abstract: The present application discloses a method, system and apparatus for storing a website private key plaintext. A specific implementation of the method includes: receiving a public key sent from a terminal configured to perform encryption and decryption, wherein the public key is generated at random by the terminal; encrypting a website private key plaintext by using the public key to generate a website private key ciphertext, wherein the website private key plaintext is pre-acquired; and sending the website private key ciphertext to the terminal, so that the terminal decrypts the website private key ciphertext by using the private key to generate the website private key plaintext and store the website private key plaintext in the terminal. This implementation improves the security of storage of the website private key plaintext.
    Type: Application
    Filed: June 9, 2017
    Publication date: May 3, 2018
    Inventors: Wei QI, Jian OUYANG, Yong WANG, Yichen TU, Sijie YANG
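
The exchange in publication 20180124023 above can be sketched as follows, assuming an RSA-OAEP key pair from the `cryptography` package (the abstract does not name a cipher). A real PEM-encoded website private key exceeds the RSA-OAEP message limit, so a production design would wrap it with hybrid encryption; the short placeholder below keeps the sketch self-contained.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Terminal side: generate a key pair at random; only the public key leaves the terminal.
terminal_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = terminal_key.public_key()

# Server side: encrypt the pre-acquired website private key plaintext with the public key.
website_private_key_plaintext = b"placeholder website private key"
ciphertext = public_key.encrypt(
    website_private_key_plaintext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# Terminal side: decrypt the ciphertext and store the plaintext locally.
recovered = terminal_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
assert recovered == website_private_key_plaintext
```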
  • Publication number: 20180107630
    Abstract: A processor and a method for executing a matrix multiplication operation on a processor. A specific implementation of the processor includes a data bus and an array processor having k processing units. The data bus is configured to sequentially read n columns of row vectors from an M×N multiplicand matrix and input same to each processing unit in the array processor, read an n×k submatrix from an N×K multiplier matrix and input each column vector of the submatrix to a corresponding processing unit in the array processor, and output a result obtained by each processing unit after executing a multiplication operation. Each processing unit in the array processor is configured to execute in parallel a vector multiplication operation on the input row and column vectors. Each processing unit includes a Wallace tree multiplier having n multipliers and n-1 adders. This implementation improves the processing efficiency of a matrix multiplication operation.
    Type: Application
    Filed: May 9, 2017
    Publication date: April 19, 2018
    Inventors: Ni Zhou, Wei Qi, Yong Wang, Jian Ouyang
  • Publication number: 20180072251
    Abstract: The present application discloses a method and apparatus for operating a field-programmable gate array (FPGA) board in a driverless vehicle. The method according to a specific embodiment includes: collecting driving scenario information on a driving scenario of the driverless vehicle; determining, based on the driving scenario information, a speed at which the driverless vehicle executes a computing operation in the driving scenario; comparing the speed with a speed threshold; and switching the working mode of the FPGA board executing the computing operation in the driverless vehicle to reduce the power consumption of the FPGA board, in response to the speed being lower than the speed threshold. This embodiment implements adaptive adjustment of the working mode of the FPGA board, thereby reducing the overall power consumption.
    Type: Application
    Filed: January 20, 2017
    Publication date: March 15, 2018
    Applicant: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Zhao ZHANG, Jian OUYANG, Jing WANG, Peng WU, Liang GAO, Yupeng LI
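
An illustrative policy in the spirit of publication 20180072251 above: compare the speed required of the computing operation with a threshold and switch the FPGA board to a lower-power working mode when the required speed is below it. The mode names and clock figures are placeholders, not values from the application.

```python
def select_fpga_mode(speed, speed_threshold, low_power_clock_mhz=100, full_clock_mhz=300):
    """Return the working mode for the FPGA board executing the computing
    operation: a low-power mode when the required speed is below the
    threshold, otherwise full performance. Values are made-up placeholders."""
    if speed < speed_threshold:
        return {"mode": "low_power", "clock_mhz": low_power_clock_mhz}
    return {"mode": "full_performance", "clock_mhz": full_clock_mhz}

print(select_fpga_mode(speed=40.0, speed_threshold=80.0))
```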
  • Patent number: 9912349
    Abstract: The present disclosure provides a method, an apparatus and a computer readable storage medium for processing a floating point number matrix. In embodiments of the present disclosure, the minimum value and the maximum value of the floating point number model matrix are obtained from the floating point number model matrix to be compressed, and compression processing is then performed on the floating point number model matrix, according to the bit width and these minimum and maximum values, to obtain the fixed point number model matrix. The compression processing is performed on the floating point number model matrix of the deep learning model by a fixed point method to obtain the fixed point number model matrix, reducing the storage space and the amount of computation of the deep learning model.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: March 6, 2018
    Assignee: Beijing Baidu Netcom Science And Technology Co., Ltd.
    Inventors: Jian Ouyang, Ni Zhou, Yong Wang, Wei Qi
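
A small sketch of the min/max driven compression in patent 9912349 above: map the floating point model matrix onto integers using its minimum, its maximum and a chosen bit width, and decompress by the inverse affine map. The 8-bit width and the rounding rule are assumptions.

```python
import numpy as np

def compress_matrix(weights, bit_width=8):
    """Affine fixed point compression driven by the matrix minimum, the
    matrix maximum and a chosen bit width; the exact rounding scheme is
    an assumption."""
    w_min, w_max = weights.min(), weights.max()
    levels = 2 ** bit_width - 1
    scale = (w_max - w_min) / levels if w_max > w_min else 1.0
    q = np.round((weights - w_min) / scale).astype(np.uint8)
    return q, scale, w_min

def decompress_matrix(q, scale, w_min):
    return q.astype(np.float64) * scale + w_min

w = np.random.randn(3, 3).astype(np.float32)
q, scale, w_min = compress_matrix(w)
print(np.max(np.abs(decompress_matrix(q, scale, w_min) - w)))  # bounded by ~scale/2
```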
  • Publication number: 20180052685
    Abstract: The present application discloses a processor and a method for executing an instruction on a processor. The method includes: fetching a to-be-executed instruction, the instruction comprising a source address field, a destination address field, an operation type field, and an operation parameter field; determining, in at least one execution unit, an execution unit controlled by a to-be-generated control signal according to the operation type field, determining a source address and a destination address of data operated by the execution unit controlled by the to-be-generated control signal according to the source address field and the destination address field, and determining a data amount of the data operated by the execution unit controlled by the to-be-generated control signal according to the operation parameter field; generating the control signal; and controlling, by using the control signal, the execution unit in the at least one execution unit to execute an operation.
    Type: Application
    Filed: November 23, 2016
    Publication date: February 22, 2018
    Inventors: Jian Ouyang, Wei Qi, Yong Wang
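
The instruction format in publication 20180052685 above names four fields but not their widths or encoding; the sketch below assumes a 64-bit word and three hypothetical execution units to show how the fields could be decoded into a control signal.

```python
from dataclasses import dataclass

@dataclass
class ControlSignal:
    unit: str          # which execution unit the signal drives
    src: int           # source address of the operated data
    dst: int           # destination address of the operated data
    amount: int        # data amount, from the operation parameter field

# Field layout and unit names are assumptions made for illustration.
OP_TYPES = {0: "dma_unit", 1: "matrix_unit", 2: "vector_unit"}

def decode(instruction):
    """Split a 64-bit word into operation type, source address, destination
    address and operation parameter fields, then build the control signal."""
    op_type = (instruction >> 60) & 0xF
    src     = (instruction >> 40) & 0xFFFFF
    dst     = (instruction >> 20) & 0xFFFFF
    param   =  instruction        & 0xFFFFF
    return ControlSignal(unit=OP_TYPES[op_type], src=src, dst=dst, amount=param)

word = (1 << 60) | (0x00100 << 40) | (0x00200 << 20) | 256
print(decode(word))
```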
  • Publication number: 20180032336
    Abstract: The present application discloses a processor and a method for executing an instruction on a processor. A specific implementation of the processor includes: a host interaction device, an instruction control device, an off-chip memory, an on-chip cache and an array processing device, wherein the host interaction device is configured to exchange data and instructions with a host connected with the processor, wherein the exchanged data has a granularity of a matrix; the off-chip memory is configured to store a matrix received from the host, on which a matrix operation is to be performed; and the instruction control device is configured to convert an external instruction received from the host to a series of memory access instructions and a series of computing instructions and execute the converted instructions. The implementation can improve the execution efficiency of a deep learning algorithm.
    Type: Application
    Filed: September 28, 2016
    Publication date: February 1, 2018
    Inventors: Wei QI, Jian OUYANG, Yong WANG
  • Publication number: 20170365306
    Abstract: The present application discloses a data processing method and apparatus. A specific embodiment of the method includes: preprocessing received to-be-processed input data; obtaining a storage address of configuration parameters of the to-be-processed input data based on a result of the preprocessing and a result obtained by linearly fitting an activation function, the configuration parameters being preset according to curve characteristics of the activation function; acquiring the configuration parameters of the to-be-processed input data according to the storage address; and processing the result of the preprocessing of the to-be-processed input data based on the configuration parameters of the to-be-processed input data and a preset circuit structure, to obtain a processing result.
    Type: Application
    Filed: September 30, 2016
    Publication date: December 21, 2017
    Applicant: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Jian Ouyang, Wei Qi, Yong Wang
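
A sketch of the lookup scheme in publication 20170365306 above, using a piecewise-linear fit of the sigmoid as the activation function (the choice of sigmoid, the 64 segments and the [-8, 8) range are assumptions): preprocessing maps the input to a table address, the slope/intercept configuration parameters are fetched from that address, and the linear piece is evaluated.

```python
import numpy as np

# Piecewise-linear fit of the sigmoid on [-8, 8); outside that range the
# function is treated as saturated. Segment count and range are assumed.
LOW, HIGH, SEGMENTS = -8.0, 8.0, 64
STEP = (HIGH - LOW) / SEGMENTS
xs = LOW + STEP * np.arange(SEGMENTS + 1)
ys = 1.0 / (1.0 + np.exp(-xs))
SLOPES = (ys[1:] - ys[:-1]) / STEP                  # configuration parameters,
INTERCEPTS = ys[:-1] - SLOPES * xs[:-1]             # indexed by segment address

def sigmoid_pwl(x):
    """Preprocess the input into a table address, fetch the segment's
    slope/intercept, and evaluate the linear piece."""
    if x < LOW:
        return 0.0
    if x >= HIGH:
        return 1.0
    addr = int((x - LOW) / STEP)                    # "storage address" of the parameters
    return SLOPES[addr] * x + INTERCEPTS[addr]

print(sigmoid_pwl(0.3), 1.0 / (1.0 + np.exp(-0.3)))
```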
  • Patent number: 9515831
    Abstract: An example method is provided for an electronic device, which may have a display and an input interface, to perform password authentication. The example method may include generating at least one sequence of input elements for entry during the authentication of the user's password to disguise entry of the user's password via the input interface by increasing the user's contact with the input interface and prompting, on the display, for an entry of the user's password and the at least one sequence of input elements via the input interface. The example method may further include receiving, via the input interface, entry of the user's password and the at least one sequence of input elements and determining whether the authentication is successful by checking whether the received entry is correct.
    Type: Grant
    Filed: October 15, 2014
    Date of Patent: December 6, 2016
    Assignee: VMware, Inc.
    Inventors: Kecheng Lu, Jian Ouyang, James Kiryakoza
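
A toy sketch of the disguised entry in patent 9515831 above: a random sequence of extra input elements is generated and prompted alongside the password, and authentication checks the combined entry. The (decoy, password, decoy) ordering is an assumption; the abstract only requires that the received entry be checked for correctness.

```python
import secrets
import string

def make_decoy(length=6):
    """Random sequence of extra input elements; typing it increases the
    user's contact with the input interface and masks where the real
    password begins and ends."""
    return "".join(secrets.choice(string.digits) for _ in range(length))

def authenticate(stored_password, received_entry, decoy_before, decoy_after):
    """Check the received entry against the prompted decoy-password-decoy
    combination (ordering assumed for illustration)."""
    expected = decoy_before + stored_password + decoy_after
    return secrets.compare_digest(received_entry, expected)

before, after = make_decoy(), make_decoy()
print(f"Please type: {before} <your password> {after}")
print(authenticate("hunter2", before + "hunter2" + after, before, after))
```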
  • Publication number: 20160112199
    Abstract: An example method is provided for an electronic device, which may have a display and an input interface, to perform password authentication. The example method may include generating at least one sequence of input elements for entry during the authentication of the user's password to disguise entry of the user's password via the input interface by increasing the user's contact with the input interface and prompting, on the display, for an entry of the user's password and the at least one sequence of input elements via the input interface. The example method may further include receiving, via the input interface, entry of the user's password and the at least one sequence of input elements and determining whether the authentication is successful by checking whether the received entry is correct.
    Type: Application
    Filed: October 15, 2014
    Publication date: April 21, 2016
    Inventors: Kecheng LU, Jian OUYANG, James KIRYAKOZA
  • Publication number: 20160060509
    Abstract: The present disclosure provides a non-metallic cross-linking agent for ultra-high temperature fracturing fluids, and a fracturing fluid, the preparation and use thereof. The non-metallic cross-linking agent of the disclosure can be prepared from the following components in weight percentage: 0.1% to 0.5% of an organic aldehyde, 0.01% to 0.05% of an organic phenol, 0 to 10% of an organic alcohol, 0.05% to 0.5% of an organic acid, and the balance of water. The fracturing fluid of the disclosure can have the following advantages: low damage to the strata, low cost, good temperature resistance, and good gel-breaking performance.
    Type: Application
    Filed: July 24, 2015
    Publication date: March 3, 2016
    Inventors: Chao WANG, Jian OUYANG, Zhuoyan ZHU, Junjie XUE, Feng WANG, Yuanyuan WANG