Patents by Inventor Jian OUYANG

Jian OUYANG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10951595
    Abstract: The present application discloses a method, system, and apparatus for storing a website private key plaintext. A specific implementation of the method includes: receiving a public key sent from a terminal configured to perform encryption and decryption, wherein the public key is generated at random by the terminal; encrypting a pre-acquired website private key plaintext using the public key to generate a website private key ciphertext; and sending the website private key ciphertext to the terminal, so that the terminal decrypts the ciphertext using the private key corresponding to the public key and stores the recovered website private key plaintext locally. This implementation improves the security of storing the website private key plaintext.
    Type: Grant
    Filed: June 9, 2017
    Date of Patent: March 16, 2021
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Wei Qi, Jian Ouyang, Yong Wang, Yichen Tu, Sijie Yang
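The encrypt-on-the-server, decrypt-on-the-terminal exchange this abstract describes can be sketched with textbook RSA. The toy key parameters below are for illustration only and are not from the patent; a real deployment would use a vetted cryptographic library.

```python
p, q = 61, 53
n = p * q                      # modulus shared by both keys
e = 17                         # public exponent (sent to the server)
d = 2753                       # private exponent (kept on the terminal)

def encrypt(m, pub):
    """Server side: encrypt the website private key plaintext."""
    exp, mod = pub
    return pow(m, exp, mod)

def decrypt(c, priv):
    """Terminal side: recover the plaintext and store it locally."""
    exp, mod = priv
    return pow(c, exp, mod)

plaintext = 42                 # stands in for the website private key bytes
ciphertext = encrypt(plaintext, (e, n))   # sent back to the terminal
recovered = decrypt(ciphertext, (d, n))   # what the terminal stores
```

The security gain comes from the plaintext never leaving the server unencrypted: only the terminal, which holds the randomly generated private key, can recover it.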
  • Publication number: 20210072996
    Abstract: Methods, apparatuses, devices, and storage media for performing a processing task are provided. One portion of the processing task includes a group of operations to be performed at one processing unit among a plurality of processing units. The group of operations includes operations of a first type and operations of a second type. In the method, a first queue for performing the operations of the first type and a second queue for performing the operations of the second type are built, respectively. Based on a definition of the processing task, a dependency relationship is obtained between the group of operations to be performed at the processing unit and the groups of operations to be performed at other processing units in the plurality of processing units. Operations in the first queue and operations in the second queue are then performed based on the dependency relationship.
    Type: Application
    Filed: December 30, 2019
    Publication date: March 11, 2021
    Inventors: Qingshu CHEN, Zhibiao ZHAO, Hefei ZHU, Xiaozhang GONG, Yong WANG, Jian OUYANG
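A minimal sketch of the two-queue scheme in this abstract, with the dependency relationship modelled as a map from an operation to the (possibly remote) operation it waits for. Names and the scheduling policy are illustrative assumptions, not the patent's design.

```python
from collections import deque

def run_unit(type1_ops, type2_ops, deps, done):
    """Drain the two per-type queues, deferring any op whose dependency
    (possibly an op on another processing unit) is not yet in `done`."""
    q1, q2 = deque(type1_ops), deque(type2_ops)   # the two built queues
    order = []
    while q1 or q2:
        progressed = False
        for q in (q1, q2):
            if q:
                dep = deps.get(q[0])
                if dep is None or dep in done:    # dependency satisfied
                    op = q.popleft()
                    done.add(op)
                    order.append(op)
                    progressed = True
        if not progressed:
            break                                 # blocked on another unit
    return order

done = {"remote_op"}                              # already finished elsewhere
order = run_unit(["load", "compute"], ["store"],
                 {"compute": "load", "store": "remote_op"}, done)
```

Here "store" may run as soon as the other unit's "remote_op" is done, while "compute" waits for the local "load".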
  • Publication number: 20210049045
    Abstract: Embodiments of the present disclosure relate to a method and apparatus for resource management, an electronic device, and a computer-readable storage medium. The method may include: determining a plurality of virtual functions to be supported, where each of the plurality of virtual functions corresponds to a virtual machine running on a computing device. The method may further include: dividing a physical resource set into a plurality of physical resource subsets according to a predetermined ratio, the number of physical resource subsets being identical to the number of virtual functions. The method may further include: allocating the plurality of physical resource subsets to the plurality of virtual functions respectively.
    Type: Application
    Filed: March 4, 2020
    Publication date: February 18, 2021
    Inventors: Xianglun Leng, Zhibiao Zhao, Jinchen Han, Jian Ouyang, Wei Qi, Yong Wang
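The ratio-based division can be sketched in a few lines. The resource model (a flat list of interchangeable blocks) and the even-divisibility assumption are illustrative simplifications.

```python
def partition(resources, ratio):
    """Split `resources` into one subset per virtual function,
    sized according to the predetermined `ratio`, e.g. (1, 1, 2)."""
    total = sum(ratio)
    assert len(resources) % total == 0, "set must divide evenly by the ratio"
    unit = len(resources) // total
    subsets, start = [], 0
    for share in ratio:
        end = start + share * unit
        subsets.append(resources[start:end])   # contiguous slice per VF
        start = end
    return subsets

# 8 physical blocks split 1:1:2 across three virtual functions / VMs
vf_slices = partition(list(range(8)), (1, 1, 2))
```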
  • Patent number: 10922785
    Abstract: A processor and method for scaling an image are disclosed. A specific embodiment of the processor includes: an off-chip memory, a communication circuit, a control circuit, and an array processor, wherein: the off-chip memory is configured for storing a to-be-scaled original image; the communication circuit is configured for receiving an image scaling instruction; the control circuit is configured for executing the image scaling instruction, and sending a calculation control signal to the array processor; and the array processor is configured for calculating in parallel channel values of N channels in a target pixel using N processing elements in the array processor under the control of the calculation control signal based on a width scaling factor, a height scaling factor, and channel values of N channels in extracted pixel data. This embodiment improves the processing speed of image scaling operations.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: February 16, 2021
    Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
    Inventors: Yichen Tu, Jian Ouyang, Wei Qi, Yong Wang
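The per-pixel computation can be sketched as below. The patent does not specify the interpolation scheme here, so this sketch assumes simple nearest-neighbour mapping, and the parallel work of the N processing elements is simulated with a per-channel loop.

```python
def scale_pixel(src, tx, ty, wf, hf):
    """Compute the N channel values of target pixel (tx, ty).

    src is an H x W x N nested list; wf / hf are the width and height
    scaling factors (target size divided by source size)."""
    sx = min(int(tx / wf), len(src[0]) - 1)   # source column
    sy = min(int(ty / hf), len(src) - 1)      # source row
    # the hardware computes these N values concurrently, one per element
    return [src[sy][sx][c] for c in range(len(src[0][0]))]

# 2x2 RGB image scaled up 2x in each dimension
img = [[[1, 2, 3], [4, 5, 6]],
       [[7, 8, 9], [10, 11, 12]]]
pixel = scale_pixel(img, 3, 3, 2.0, 2.0)      # maps back to source (1, 1)
```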
  • Publication number: 20210034900
    Abstract: Embodiments of the present disclosure provide a method and apparatus for extracting image data in parallel from multiple convolution windows, a device, and a computer-readable storage medium. The method includes: dividing an image into multiple groups of convolution windows, where the multiple groups of convolution windows include a first group of convolution windows and a second group of convolution windows, and each group of convolution windows includes multiple convolution windows. The method further includes extracting image data in parallel from multiple convolution windows in the first group of convolution windows by using multiple data processing units, and extracting, after the extraction of image data from the first group of convolution windows is completed, image data from multiple convolution windows in the second group of convolution windows in parallel by using the multiple data processing units.
    Type: Application
    Filed: March 3, 2020
    Publication date: February 4, 2021
    Inventors: Zihao Liang, Jian Ouyang
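The grouping idea can be sketched in one dimension for brevity (the 1-D data, window size, and unit count are illustrative assumptions): windows are batched so that each data-processing unit takes one window per group, and a group starts only after the previous one finishes.

```python
def extract_in_groups(data, window, stride, num_units):
    """Slice `data` into sliding windows and batch them into groups
    of `num_units`, one window per data-processing unit."""
    starts = range(0, len(data) - window + 1, stride)
    windows = [data[s:s + window] for s in starts]      # all windows
    groups = [windows[i:i + num_units]
              for i in range(0, len(windows), num_units)]
    extracted = []
    for group in groups:            # groups run one after another
        extracted.extend(group)     # units within a group run in parallel
    return groups, extracted

groups, flat = extract_in_groups([0, 1, 2, 3, 4, 5],
                                 window=3, stride=1, num_units=2)
```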
  • Publication number: 20210034644
    Abstract: Embodiments of the present disclosure relate to a method and apparatus for reducing storage space of a parameter table. The method may include: storing the parameter table in a lookup table system configured to compute an output value of a non-linear function according to an input value of the non-linear function, the parameter table including only an index value associated with an input value on one side of a median in a domain of the non-linear function and a parameter value corresponding to the index value; determining, by using a corresponding relationship between the index value associated with the input value on one side and the parameter value corresponding to the index value, a parameter value corresponding to an index value associated with an input value on the other side; and computing the output value by using the input value on the other side and the determined corresponding parameter value.
    Type: Application
    Filed: March 10, 2020
    Publication date: February 4, 2021
    Inventors: Huimin LI, Jian OUYANG
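For an odd non-linear function the halving trick in this abstract is easy to sketch: store table entries only for inputs at or above the domain median (0 here) and derive the other side from f(-x) = -f(x). The choice of tanh, the step size, and the table depth are illustrative assumptions, not the patent's parameters.

```python
import math

STEP = 0.5
TABLE = {i: math.tanh(i * STEP) for i in range(9)}   # entries for x >= 0 only

def lookup(x):
    """Output value for x, using only the half-sized parameter table."""
    idx = round(abs(x) / STEP)              # index on the stored side
    value = TABLE[min(idx, max(TABLE))]     # clamp to the table's range
    return value if x >= 0 else -value      # mirror for the other side

y = lookup(-1.0)                            # uses the x = +1.0 entry, negated
```

Storing one side halves the table while every input on the other side is still served by the stored correspondence.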
  • Publication number: 20210026630
    Abstract: Embodiments of the present disclosure provide a method, executed by a computing device, for configuring a vector operation, an apparatus, a device, and a storage medium. The method includes obtaining information indicating at least one configurable vector operation parameter, the information indicating a type and a value of the configurable vector operation parameter. The method further includes: based on the type and the value of the configurable vector operation parameter, configuring multiple vector operation circuits so that each vector operation circuit executes a target vector operation that comprises two or more basic vector operations and is defined based on the type and value of the configurable vector operation parameter.
    Type: Application
    Filed: July 23, 2020
    Publication date: January 28, 2021
    Inventors: Huimin LI, Peng WU, Jian OUYANG
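A software analogue of the configuration step: a (type, value) parameter turns each "circuit" (here a closure) into a target operation fusing two basic vector operations. The parameter types and fused operations are illustrative.

```python
def configure(param_type, param_value):
    """Return a vector 'circuit' configured by the (type, value) parameter."""
    if param_type == "scale_add":           # fused multiply then add
        return lambda a, b: [param_value * x + y for x, y in zip(a, b)]
    if param_type == "sub_abs":             # fused subtract then abs
        return lambda a, b: [abs(x - y) for x, y in zip(a, b)]
    raise ValueError(param_type)

# configure four identical circuits from one parameter
circuits = [configure("scale_add", 2.0) for _ in range(4)]
out = circuits[0]([1.0, 2.0], [10.0, 20.0])
```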
  • Publication number: 20210004679
    Abstract: Presented herein are embodiments of an improved asymmetric quantization (IAQ). IAQ embodiments combine the benefits of conventional asymmetric quantization and symmetric quantization while also providing additional computation efficiencies. Embodiments of IAQ adopt an asymmetric range for the weights of a neural network layer, so they circumvent the symmetric-range limitation of symmetric quantization. Moreover, because the offset value of each layer is itself quantized, inference with a neural network quantized by an IAQ embodiment is much faster than with one quantized by conventional asymmetric quantization.
    Type: Application
    Filed: May 19, 2020
    Publication date: January 7, 2021
    Applicant: Baidu USA LLC
    Inventors: Yingzhen YANG, Zhibiao ZHAO, Baoxin ZHAO, Jun HUAN, Jian OUYANG, Yong WANG, Jiaxin SHI
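A minimal sketch of conventional asymmetric (scale plus zero-point) quantization over a layer's true weight range, the baseline this abstract improves on; the IAQ-specific step of quantizing the per-layer offset to speed up inference is only noted in a comment, since the abstract does not give its details. Bit width and weights are illustrative.

```python
def quantize(weights, bits=8):
    """Asymmetric quantization: map the true [min, max] weight range
    onto 0 .. 2**bits - 1 via a scale and an integer zero-point."""
    lo, hi = min(weights), max(weights)          # asymmetric range
    scale = (hi - lo) / (2 ** bits - 1)
    zero_point = round(-lo / scale)              # per-layer integer offset
    # IAQ additionally quantizes this offset so the asymmetric term can be
    # folded into integer arithmetic at inference time
    q = [round(w / scale) + zero_point for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(v - zero_point) * scale for v in q]

w = [-0.5, 0.0, 1.5]
q, s, z = quantize(w)
approx = dequantize(q, s, z)
```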
  • Publication number: 20200409703
    Abstract: The present disclosure provides a method, an apparatus, a device, and a medium for processing a loop instruction set. The method includes: in response to obtaining a first start instruction of the loop instruction set, storing a first loop number related to the loop instruction set into a first register, and storing a value of a first program counter corresponding to a loop instruction following the first start instruction in the loop instruction set, into a second register. The method further includes: obtaining the loop instruction following the first start instruction in the loop instruction set for executing the loop instruction. The method further includes: in response to obtaining a first end instruction for indicating an end of the loop instruction set, determining a loop execution for the loop instruction set based on the first loop number and the value of the first program counter.
    Type: Application
    Filed: May 13, 2020
    Publication date: December 31, 2020
    Inventors: Kang AN, Xueliang DU, Jian OUYANG
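The two registers named in the abstract can be modelled with a toy interpreter: the start instruction latches the loop number into one register and the body's program-counter value into the other, and the end instruction jumps back until the count runs out. The instruction names are illustrative.

```python
def execute(program):
    """Run a list of (opcode, argument) pairs; return the trace of body ops."""
    pc, trace = 0, []
    loop_count = loop_pc = 0                  # the two registers
    while pc < len(program):
        op, arg = program[pc]
        if op == "LOOP_START":
            loop_count = arg                  # first register: loop number
            loop_pc = pc + 1                  # second register: body PC
        elif op == "LOOP_END":
            loop_count -= 1
            if loop_count > 0:
                pc = loop_pc                  # jump back to the loop body
                continue
        else:
            trace.append((op, arg))           # ordinary loop instruction
        pc += 1
    return trace

trace = execute([("LOOP_START", 3), ("ADD", 1), ("LOOP_END", None)])
```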
  • Publication number: 20200353941
    Abstract: An automatic driving processing system, a system on chip and a method for monitoring a processing module are described herein. The automatic driving processing system comprises: an automatic driving processing module, configured for receiving an input data stream and processing the input data stream based on a deep learning model so as to generate a processing result; a fault detection module, configured for generating a control signal and a fault detection stimulating data stream, and receiving the processing result from the automatic driving processing module; and a multi-way selection module, configured for receiving an automatic driving data stream as well as the control signal and the fault detection stimulating data stream, and selectively outputting the automatic driving data stream or the fault detection stimulating data stream to the automatic driving processing module based on the control signal, as an input data stream.
    Type: Application
    Filed: December 11, 2019
    Publication date: November 12, 2020
    Inventors: Chonggin Wang, Zhibiao Zhao, Hefei Zhu, Ningyi Xu, Jian Ouyang
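The multi-way selection can be sketched as a simple mux, with the deep-learning module stood in by a trivial function. All names and the doubling "model" are illustrative.

```python
DRIVE, TEST = 0, 1                            # control-signal values (assumed)

def mux(control, driving_stream, stimulus_stream):
    """Select the processing module's input stream from the control signal."""
    return stimulus_stream if control == TEST else driving_stream

def process(stream):                          # stand-in for the deep model
    return [x * 2 for x in stream]

# during a periodic self-test the known stimulus is routed in, and the
# fault detection module can compare the result with the expected response
result = process(mux(TEST, [1, 2, 3], [10, 20]))
```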
  • Publication number: 20200218821
    Abstract: According to one embodiment, a system establishes a secure connection between a host system and a data processing (DP) accelerator over a bus, the secure connection including one or more data channels. The system transmits a first instruction from the host system to the DP accelerator over a command channel, the first instruction requesting the DP accelerator to perform a data preparation operation. The system receives a first request to read a first data from a first memory location of the host system from the DP accelerator over one data channel. In response to the request, the system transmits the first data to the DP accelerator over the data channel, where the first data is utilized for a computation or a configuration operation. The system transmits a second instruction from the host system to the DP accelerator over the command channel to perform the computation or the configuration operation.
    Type: Application
    Filed: January 24, 2020
    Publication date: July 9, 2020
    Inventors: Yong LIU, Yueqiang CHENG, Jian OUYANG, Tao WEI
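The command-channel / data-channel exchange can be modelled with two objects: the host issues a preparation instruction, the accelerator reads host memory back over the data channel, then a second instruction triggers the computation. Class names, instruction names, and the sum computation are illustrative assumptions.

```python
class Host:
    def __init__(self, memory):
        self.memory = memory
    def read(self, addr):                     # serves data-channel reads
        return self.memory[addr]

class DPAccelerator:
    def __init__(self):
        self.staged = None
    def command(self, host, instr):           # command channel
        if instr == "prepare":                # first instruction
            self.staged = host.read(0)        # data-channel read request
        elif instr == "compute":              # second instruction
            return sum(self.staged)
        return None

host = Host({0: [1, 2, 3]})
acc = DPAccelerator()
acc.command(host, "prepare")
total = acc.command(host, "compute")
```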
  • Publication number: 20200159461
    Abstract: Embodiments of the present disclosure provide a data accessing method, a device and a storage medium. The method includes: obtaining a first accessing request and a second accessing request for a storage device; loading first data associated with the first accessing request from a source device to a pre-allocated buffer area whose size equals the size of a single physical storage block of the storage device; determining a first part of second data associated with the second accessing request when a first size of the second data is greater than or equal to a second size of the available space of the buffer area, the size of the first part being equal to the second size; and providing the first data and the first part to a target device associated with the first accessing request and the second accessing request.
    Type: Application
    Filed: November 20, 2019
    Publication date: May 21, 2020
    Inventors: Zihao LIANG, Jian OUYANG
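The splitting rule can be sketched as below: a buffer the size of one physical storage block takes the first request's data, then is topped up with the leading part of the second request's data, with the remainder deferred. The block size and the list-based model are assumptions for illustration.

```python
BLOCK = 8                                     # physical block size (assumed)

def pack(first, second):
    """Fill the block-sized buffer with the first data plus the leading
    part of the second data; return (buffer, deferred remainder)."""
    buf = list(first)
    avail = BLOCK - len(buf)                  # available space in the buffer
    buf += second[:avail]                     # first part of the second data
    return buf, second[avail:]                # remainder handled later

buf, remainder = pack([1, 2, 3, 4, 5], [6, 7, 8, 9, 10])
```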
  • Patent number: 10607668
    Abstract: The present application discloses a data processing method and apparatus. A specific embodiment of the method includes: preprocessing received to-be-processed input data; obtaining a storage address of configuration parameters of the to-be-processed input data based on a result of the preprocessing and a result obtained by linearly fitting an activation function, the configuration parameters being preset according to curve characteristics of the activation function; acquiring the configuration parameters of the to-be-processed input data according to the storage address; and processing the result of the preprocessing of the to-be-processed input data based on the configuration parameters of the to-be-processed input data and a preset circuit structure, to obtain a processing result.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: March 31, 2020
    Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
    Inventors: Jian Ouyang, Wei Qi, Yong Wang
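The configuration-parameter flow in this abstract can be sketched with a piecewise-linear activation: the input locates a segment (the "storage address" step), its (slope, intercept) parameters are fetched, and a fixed multiply-add produces the result. The segment table below is an illustrative ReLU-like fit, not the patent's parameters.

```python
SEGMENTS = [                                  # (x_start, slope, intercept)
    (-10.0, 0.0, 0.0),                        # x < 0   -> 0
    (0.0, 1.0, 0.0),                          # x >= 0  -> x
]

def activate(x):
    """Evaluate the linearly fitted activation for input x."""
    slope, intercept = 0.0, 0.0
    for start, k, b in SEGMENTS:              # locate the segment holding x
        if x >= start:
            slope, intercept = k, b           # fetch its config parameters
    return slope * x + intercept              # fixed multiply-add circuit

y_neg, y_pos = activate(-2.0), activate(3.0)
```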
  • Publication number: 20200050456
    Abstract: Embodiments of the present disclosure relate to a method for processing information, and a processor. The processor includes an arithmetic and logic unit, a bypass unit, a queue unit, a multiplexer, and a register file. The bypass unit includes a data processing subunit; the data processing subunit is configured to acquire at least one valid processing result outputted by the arithmetic and logic unit, determine a processing result from the at least one valid processing result, output the determined processing result to the multiplexer, and output the remaining processing results from among the at least one valid processing result to the queue unit; and the multiplexer is configured to sequentially output the valid processing results to the register file.
    Type: Application
    Filed: July 3, 2019
    Publication date: February 13, 2020
    Inventor: Jian Ouyang
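A software analogue of the bypass flow: one of the ALU's valid results is forwarded directly, the rest are parked in the queue, and the multiplexer then emits everything to the register file in order. The oldest-first selection policy is an assumption for illustration.

```python
from collections import deque

def bypass(valid_results, queue):
    """Forward one result now; park the remainder in the queue unit."""
    forwarded = valid_results[0]              # assumed policy: oldest first
    queue.extend(valid_results[1:])
    return forwarded

queue = deque()
register_file = []
register_file.append(bypass([10, 11, 12], queue))   # via the multiplexer
while queue:                                  # multiplexer drains the queue
    register_file.append(queue.popleft())
```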
  • Publication number: 20200050557
    Abstract: Disclosed are an apparatus for data processing, an artificial intelligence chip, and an electronic device. The apparatus for data processing includes: at least one input memory, at least one data conveying component, at least one multiplexed arbitration component, and at least one output memory. The input memory is connected to the data conveying component, the data conveying component is connected to the multiplexed arbitration component, and the multiplexed arbitration component is connected to the output memory.
    Type: Application
    Filed: July 9, 2019
    Publication date: February 13, 2020
    Inventors: Peng Wu, Jian Ouyang, Canghai Gu, Wei Qi, Ningyi Xu
  • Publication number: 20200050481
    Abstract: Disclosed are a computing method applied to an artificial intelligence chip and the artificial intelligence chip.
    Type: Application
    Filed: July 9, 2019
    Publication date: February 13, 2020
    Inventors: Jian Ouyang, Xueliang Du, Yingnan Xu, Huimin Li
  • Publication number: 20190164254
    Abstract: A processor and method for scaling an image are disclosed. A specific embodiment of the processor includes: an off-chip memory, a communication circuit, a control circuit, and an array processor, wherein: the off-chip memory is configured for storing a to-be-scaled original image; the communication circuit is configured for receiving an image scaling instruction; the control circuit is configured for executing the image scaling instruction, and sending a calculation control signal to the array processor; and the array processor is configured for calculating in parallel channel values of N channels in a target pixel using N processing elements in the array processor under the control of the calculation control signal based on a width scaling factor, a height scaling factor, and channel values of N channels in extracted pixel data. This embodiment improves the processing speed of image scaling operations.
    Type: Application
    Filed: February 1, 2019
    Publication date: May 30, 2019
    Inventors: Yichen Tu, Jian Ouyang, Wei Qi, Yong Wang
  • Publication number: 20190114202
    Abstract: The present disclosure provides a task scheduling method and apparatus for artificial intelligence heterogeneous hardware, a device and a readable medium. The method comprises: receiving a task execution request for a corresponding function sent from an API, the task execution request carrying attribute information of the task; obtaining a priority of the task according to the attribute information of the task, wherein a priority of an online service is higher than a priority of an offline task; inserting the corresponding task into a scheduling queue of the corresponding function according to the priority of the task, tasks in the scheduling queue being arranged in descending order of priority; and controlling, in turn, a free computing unit among a plurality of computing units of the corresponding function to execute the corresponding task, in descending order of the priorities of the tasks in the scheduling queue.
    Type: Application
    Filed: October 12, 2018
    Publication date: April 18, 2019
    Inventors: Yong WANG, Jian OUYANG, Wei QI
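The scheduling rule can be sketched with a priority heap: online-service tasks outrank offline tasks, and a free computing unit always takes the highest-priority queued task. The priority values and the single-queue simplification are illustrative.

```python
import heapq

ONLINE, OFFLINE = 0, 1                        # lower number = higher priority

def schedule(tasks):
    """tasks: list of (kind, name); return the execution order."""
    heap = []
    for seq, (kind, name) in enumerate(tasks):
        heapq.heappush(heap, (kind, seq, name))   # seq keeps FIFO per kind
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)      # a free unit takes the head
        order.append(name)
    return order

order = schedule([(OFFLINE, "batch_train"), (ONLINE, "serve_req"),
                  (OFFLINE, "batch_eval")])
```

The late-arriving online request still runs first, matching the abstract's rule that online services outrank offline tasks.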
  • Patent number: 10261796
    Abstract: A processor and a method for executing an instruction on a processor are provided. In the method, a to-be-executed instruction is fetched, the instruction including a source address field, a destination address field, an operation type field, and an operation parameter field; in at least one execution unit, an execution unit controlled by a to-be-generated control signal according to the operation type field is determined, a source address and a destination address of data operated by the execution unit are determined according to the source address field and the destination address field, and a data amount of the data operated by the execution unit controlled by the to-be-generated control signal is determined according to the operation parameter field; the control signal is generated; and the execution unit in the at least one execution unit is controlled by using the control signal.
    Type: Grant
    Filed: November 23, 2016
    Date of Patent: April 16, 2019
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Jian Ouyang, Wei Qi, Yong Wang
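The four-field instruction format named in the abstract can be sketched with a hypothetical bit layout (the field widths below are assumptions, not the patent's encoding): pack source address, destination address, operation type, and operation parameter into one word and decode them before dispatch.

```python
SRC_BITS, DST_BITS, TYPE_BITS, PARAM_BITS = 16, 16, 8, 8

def encode(src, dst, op_type, param):
    """Pack the four instruction fields into a single integer word."""
    word = src
    word = (word << DST_BITS) | dst
    word = (word << TYPE_BITS) | op_type
    word = (word << PARAM_BITS) | param
    return word

def decode(word):
    """Recover (src, dst, op_type, param) for control-signal generation."""
    param = word & ((1 << PARAM_BITS) - 1); word >>= PARAM_BITS
    op_type = word & ((1 << TYPE_BITS) - 1); word >>= TYPE_BITS
    dst = word & ((1 << DST_BITS) - 1); word >>= DST_BITS
    return word, dst, op_type, param          # src is what remains

fields = decode(encode(0x1000, 0x2000, 3, 64))
```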
  • Patent number: 10189426
    Abstract: The present application discloses a method and apparatus for operating a field-programmable gate array (FPGA) board in a driverless vehicle. The method according to a specific embodiment includes: collecting driving scenario information on a driving scenario of the driverless vehicle; determining, based on the driving scenario information, a speed at which the driverless vehicle executes a computing operation in the driving scenario; comparing the speed with a speed threshold; switching a working mode of the FPGA board in the driverless vehicle executing the computing operation to reduce power consumption of the FPGA board, in response to the speed being lower than the speed threshold. This embodiment implements the adaptive adjustment of the working mode of the FPGA board, thereby reducing the overall power consumption.
    Type: Grant
    Filed: January 20, 2017
    Date of Patent: January 29, 2019
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Zhao Zhang, Jian Ouyang, Jing Wang, Peng Wu, Liang Gao, Yupeng Li
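The adaptive rule in this abstract reduces to one comparison: measure the speed at which the scenario requires the computing operation to run, and downshift the FPGA's working mode when it falls below the threshold. The mode names, threshold, and units below are illustrative assumptions.

```python
SPEED_THRESHOLD = 100.0                       # required ops per ms (assumed)

def select_mode(scenario_speed):
    """Pick the FPGA working mode for the current driving scenario."""
    if scenario_speed < SPEED_THRESHOLD:
        return "low_power"                    # lower clock, less consumption
    return "performance"

mode = select_mode(40.0)                      # e.g. slow parking-lot driving
```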