Patents by Inventor Zhenjiang Wang

Zhenjiang Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230409886
    Abstract: The present disclosure provides a method and apparatus for deconvolving feature data using convolution hardware. The method includes: reading a feature map and a deconvolution kernel into on-chip memory, and padding zeroes to the feature map; determining convolution kernels based on the deconvolution kernel; removing, from each convolution kernel, any row and/or column whose elements are all invalid weights, to obtain an optimized convolution kernel, and removing the corresponding row and/or column in the zero-padded feature map to obtain a corresponding optimized feature map; convolving each optimized convolution kernel with its corresponding optimized feature map using the multiply-add array, to obtain convolutional outputs; and interleaving and synthesizing the convolutional outputs to obtain an interleaved synthetic output including at least a deconvolutional output corresponding to the feature map and the deconvolution kernel.
    Type: Application
    Filed: February 10, 2022
    Publication date: December 21, 2023
    Applicant: Beijing Horizon Robotics Technology Research and Development Co., Ltd.
    Inventors: Zhuoran ZHAO, Kai YU, Chang HUANG, Zhenjiang WANG, Jianjun LI, Delin LI, Yinan ZHANG
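    The optimization builds on a standard equivalence: a transposed (de)convolution can be computed on ordinary convolution hardware by inserting zeros into the feature map and running a plain convolution, and the patent then derives sub-kernels so the all-zero rows/columns never reach the multiply-add array. A minimal 1-D Python/NumPy sketch of the baseline equivalence only (function names and the scatter-add reference are illustrative assumptions, not the patented implementation):

      import numpy as np

      def transposed_conv1d_direct(x, w, stride):
          # Reference: scatter-add form of 1-D transposed convolution.
          N, K = len(x), len(w)
          y = np.zeros((N - 1) * stride + K)
          for n in range(N):
              y[n * stride : n * stride + K] += x[n] * w
          return y

      def transposed_conv1d_via_conv(x, w, stride):
          # Same result on "convolution hardware": insert stride-1 zeros
          # between input samples, then run an ordinary full convolution.
          N = len(x)
          up = np.zeros((N - 1) * stride + 1)
          up[::stride] = x
          return np.convolve(up, w, mode="full")

      x, w = np.random.rand(7), np.random.rand(4)
      print(np.allclose(transposed_conv1d_direct(x, w, 2),
                        transposed_conv1d_via_conv(x, w, 2)))   # True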
  • Publication number: 20230376732
    Abstract: A processing method includes: obtaining an input feature map; processing the input feature map by using a dilated convolution layer of the convolutional neural network, to obtain a plurality of local feature maps; obtaining a plurality of local output feature maps by performing zero padding on the plurality of local feature maps and performing convolution processing on the zero-padded local feature maps; and fusing the plurality of local output feature maps, to obtain an output feature map processed by the dilated convolution layer. A plurality of consecutive local feature maps can be split from the input feature map, and convolution processing can be performed on each local feature map by using a compact convolution kernel. Performing dilated convolution processing on the input feature map without increasing computational complexity overcomes the limitation that holes impose on a dilated convolution algorithm, and can realize data reuse between adjacent sliding windows.
    Type: Application
    Filed: March 30, 2023
    Publication date: November 23, 2023
    Applicant: Beijing Horizon Information Technology Co., Ltd.
    Inventors: Zhuoran ZHAO, Zhao GU, Delin LI, Jianjun LI, Zhenjiang WANG
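    The splitting idea can be seen in one dimension: output positions of a dilated convolution that share the same phase modulo the dilation rate depend only on one interleaved sub-map of the input, so each local map can be processed with a compact (dense) kernel and the local outputs interleaved back. A short Python/NumPy sketch of that equivalence (illustrative only; it omits the zero padding and 2-D handling described in the abstract):

      import numpy as np

      def dilated_conv1d(x, w, d):
          # Direct "holes" form: valid 1-D dilated cross-correlation.
          K = len(w)
          out_len = len(x) - (K - 1) * d
          return np.array([np.dot(w, x[i : i + (K - 1) * d + 1 : d])
                           for i in range(out_len)])

      def dilated_conv1d_split(x, w, d):
          # Simplified patent-style idea: split the input into d interleaved
          # local maps, run a dense convolution on each with the compact
          # kernel, then interleave the local outputs back into one map.
          K = len(w)
          out = np.empty(len(x) - (K - 1) * d)
          for p in range(d):
              xp = x[p::d]                                  # local feature map, phase p
              dense = np.array([np.dot(w, xp[j : j + K])    # dense valid correlation
                                for j in range(len(xp) - K + 1)])
              out[p::d] = dense[: len(out[p::d])]
          return out

      x, w = np.random.rand(23), np.random.rand(3)
      print(np.allclose(dilated_conv1d(x, w, 2), dilated_conv1d_split(x, w, 2)))  # True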
  • Patent number: 11748250
    Abstract: This application discloses a data processing method and apparatus, an electronic device, and a storage medium. When execution is performed at an operation layer of a neural network model, based on a pre-stored buffer allocation relationship, a first address range for cyclic addressing is set for a first buffer corresponding to input data, and a second address range for cyclic addressing is set for a second buffer corresponding to an output result. Subsequently, cyclic addressing can be performed in the first buffer based on the first address range, to read the input data for the operation layer; and cyclic addressing can be performed in the second buffer based on the second address range, to write the output result of the operation layer into the second buffer. In this way, buffer utilization is effectively improved, which in turn improves the operation efficiency of the model.
    Type: Grant
    Filed: November 29, 2021
    Date of Patent: September 5, 2023
    Assignee: Beijing Horizon Robotics Technology Research and Development Co., Ltd.
    Inventors: Jianjun Li, Meng Yao, Zhenjiang Wang, Yu Zhou
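    Cyclic addressing of the kind described can be modeled as a buffer whose offsets wrap modulo the length of its assigned address range, with one range for the layer's input and one for its output. A hedged Python sketch of that model (the class, base addresses, and sizes are hypothetical, not the claimed apparatus):

      class CyclicBuffer:
          """Minimal model of a buffer with a cyclic addressing range [base, base+size)."""
          def __init__(self, memory, base, size):
              self.mem, self.base, self.size = memory, base, size

          def _wrap(self, offset):
              # Any offset past the end of the range wraps back to its start.
              return self.base + (offset % self.size)

          def read(self, offset, length):
              return [self.mem[self._wrap(offset + i)] for i in range(length)]

          def write(self, offset, values):
              for i, v in enumerate(values):
                  self.mem[self._wrap(offset + i)] = v

      # Hypothetical layer execution: read inputs through the first range,
      # write outputs through the second, letting addresses wrap automatically.
      memory = [0] * 32
      in_buf  = CyclicBuffer(memory, base=0,  size=16)   # first address range
      out_buf = CyclicBuffer(memory, base=16, size=16)   # second address range

      in_buf.write(0, list(range(20)))          # 20 values wrap inside a 16-entry range
      tile = in_buf.read(12, 8)                 # a read that crosses the wrap point
      out_buf.write(0, [v * 2 for v in tile])   # layer "output" written cyclically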
  • Patent number: 11581903
    Abstract: Disclosed are a data compression method, a computer-readable storage medium, and an electronic device. The method includes: converting each data item in a to-be-compressed data set into binary data in a preset format; determining a to-be-compressed bit and a significant bit for each data item in the to-be-compressed data set based on a sequence of all bits of the binary data; determining a compression bit width corresponding to the to-be-compressed data set based on bit widths of the significant bits; compressing each data item in the to-be-compressed data set based on the compression bit width, to obtain a compressed data set; and generating attribute information of the compressed data set. According to the present disclosure, the significant bit can be determined based on the sequence of all bits without adjusting the order of the bits of the binary data, thereby simplifying the data compression process and improving the efficiency of data compression.
    Type: Grant
    Filed: November 15, 2021
    Date of Patent: February 14, 2023
    Assignee: Beijing Horizon Information Technology Co., Ltd.
    Inventors: Zhenjiang Wang, Jianjun Li, Zhuoran Zhao, Chang Huang
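    The core idea is that a whole block of values shares one compression bit width derived from the widest significant-bit field in the block, so leading to-be-compressed bits are dropped without reordering any bits. A toy Python sketch under the assumption of unsigned integers in a preset width (sign handling and the exact attribute format are not modeled):

      def compress_block(values, preset_width=16):
          """Toy sketch: derive a common compression bit width from each value's
          significant bits (assuming unsigned data) and pack the block."""
          widths = [max(v.bit_length(), 1) for v in values]
          bit_width = max(widths)                    # compression bit width for the block
          packed = 0
          for v in values:
              packed = (packed << bit_width) | v     # drop the to-be-compressed leading zeros
          attr = {"bit_width": bit_width, "count": len(values), "orig_width": preset_width}
          return packed, attr

      def decompress_block(packed, attr):
          w, n = attr["bit_width"], attr["count"]
          mask = (1 << w) - 1
          return [(packed >> (w * (n - 1 - i))) & mask for i in range(n)]

      data = [3, 17, 250, 0, 42]
      packed, attr = compress_block(data)
      assert decompress_block(packed, attr) == data
      print(attr)   # {'bit_width': 8, 'count': 5, 'orig_width': 16}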
  • Publication number: 20220351329
    Abstract: Embodiments of the present disclosure disclose an image processing method, a method for generating instructions for image processing, and apparatuses therefor. The method includes: if an ROI image is obtained, splitting the ROI image into a plurality of image blocks based on a first image size supported by an image processing model and first split data, wherein each image size obtained by splitting the first image size based on the first split data matches a hardware output size of an image scaling module; performing image scaling on each image block to obtain scaled image blocks, wherein the image size of each scaled image block is consistent with a respective image size; and inputting all scaled image blocks to the image processing model sequentially. In the embodiments, although the output of the image scaling module is limited, subsequent processes involved in the visual image processing technology can still be executed properly.
    Type: Application
    Filed: November 24, 2020
    Publication date: November 3, 2022
    Inventors: Junzhi Shen, Zhenjiang Wang, Jianjun Li
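    In effect, the model's input size is tiled so that no single tile exceeds the scaler's hardware output size, each tile is mapped back to a region of the ROI, and the regions are scaled and fed to the model in order. A rough Python sketch of such tiling (the split rule, sizes, and coordinate mapping are assumptions for illustration only):

      def split_for_scaler(model_size, hw_out_size):
          """Split the model's input size into tile sizes the scaling hardware
          can emit; the last tile absorbs the remainder (toy split rule)."""
          w, h = model_size
          tw, th = hw_out_size
          xs = list(range(0, w, tw)) + [w]
          ys = list(range(0, h, th)) + [h]
          return [(x0, y0, x1 - x0, y1 - y0)            # (x, y, width, height) in model space
                  for y0, y1 in zip(ys, ys[1:])
                  for x0, x1 in zip(xs, xs[1:])]

      def roi_block_for_tile(tile, model_size, roi):
          """Map a model-space tile back to the ROI region that must be scaled into it."""
          x, y, tw, th = tile
          rx, ry, rw, rh = roi
          sx, sy = rw / model_size[0], rh / model_size[1]
          return (rx + x * sx, ry + y * sy, tw * sx, th * sy)

      tiles = split_for_scaler(model_size=(224, 224), hw_out_size=(128, 128))
      blocks = [roi_block_for_tile(t, (224, 224), roi=(50, 80, 640, 360)) for t in tiles]
      # Each ROI block is scaled independently, then the scaled tiles are fed to
      # the model in order, so no scaling job exceeds the hardware output size.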
  • Publication number: 20220197786
    Abstract: This application discloses a data processing method and apparatus, an electronic device, and a storage medium. When execution is performed at an operation layer of a neural network model, based on a pre-stored buffer allocation relationship, a first address range for cyclic addressing is set for a first buffer corresponding to input data, and a second address range for cyclic addressing is set for a second buffer corresponding to an output result. Subsequently, cyclic addressing can be performed in the first buffer based on the first address range, to read the input data for the operation layer; and cyclic addressing can be performed in the second buffer based on the second address range, to write the output result of the operation layer into the second buffer. In this way, buffer utilization is effectively improved, which in turn improves the operation efficiency of the model.
    Type: Application
    Filed: November 29, 2021
    Publication date: June 23, 2022
    Inventors: Jianjun Li, Meng Yao, Zhenjiang Wang, Yu Zhou
  • Publication number: 20220182072
    Abstract: Embodiments of the present disclosure disclose a data compression method and apparatus, a computer-readable storage medium, and an electronic device. The method includes: converting each data item in a to-be-compressed data set into binary data in a preset format; determining a to-be-compressed bit and a significant bit for each data item in the to-be-compressed data set based on a sequence of all bits of the binary data; determining a compression bit width corresponding to the to-be-compressed data set based on bit widths of the significant bits; compressing each data item in the to-be-compressed data set based on the compression bit width, to obtain a compressed data set; and generating attribute information of the compressed data set. According to the embodiments of the present disclosure, the significant bit can be determined based on the sequence of all bits without adjusting the order of the bits of the binary data.
    Type: Application
    Filed: November 15, 2021
    Publication date: June 9, 2022
    Inventors: Zhenjiang Wang, Jianjun Li, Zhuoran Zhao, Chang Huang
  • Publication number: 20220076097
    Abstract: The present application discloses a neural network computation method that includes: determining the size of the first feature map obtained when the processor computes the present layer of the neural network, before performing convolution computation on the next layer of the neural network; determining a convolution computation order for the next layer according to the size of the first feature map and the size of the second feature map for a convolution supported by the next layer; and executing convolution computation instructions for the next layer based on the convolution computation order. Exemplary embodiments of the present disclosure decrease the interlayer feature map data access overhead and reduce the idle time of the computation unit by leaving out the storage of the first feature map and the loading of the second feature map.
    Type: Application
    Filed: September 7, 2021
    Publication date: March 10, 2022
    Applicant: HORIZON (SHANGHAI) ARTIFICIAL INTELLIGENCE TECHNOLOGY CO., LTD.
    Inventors: Zhuoran ZHAO, Zhenjiang WANG
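    The benefit of choosing the next layer's computation order is that the first feature map never has to be written out and reloaded. A toy 1-D Python/NumPy sketch of that effect, computing two stacked convolutions tile by tile so only a small slice of the intermediate map exists at a time (the tile size and valid-only convolutions are simplifying assumptions, not the claimed ordering rule):

      import numpy as np

      def conv1d_valid(x, w):
          K = len(w)
          return np.array([np.dot(w, x[i:i + K]) for i in range(len(x) - K + 1)])

      def two_layers_fused(x, w1, w2, tile=8):
          """Compute layer-2 output tile by tile, regenerating only the slice of
          the layer-1 feature map each tile needs, so the full intermediate map
          is never stored to (or reloaded from) memory."""
          K1, K2 = len(w1), len(w2)
          out_len = len(x) - (K1 - 1) - (K2 - 1)
          y = np.empty(out_len)
          for j0 in range(0, out_len, tile):
              j1 = min(j0 + tile, out_len)
              x_slice = x[j0 : j1 + K1 + K2 - 2]      # input region feeding this tile
              h_tile = conv1d_valid(x_slice, w1)      # layer-1 values for this tile only
              y[j0:j1] = conv1d_valid(h_tile, w2)
          return y

      x, w1, w2 = np.random.rand(40), np.random.rand(3), np.random.rand(5)
      ref = conv1d_valid(conv1d_valid(x, w1), w2)
      print(np.allclose(two_layers_fused(x, w1, w2), ref))   # True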
  • Patent number: 11163686
    Abstract: Disclosed are a method and an apparatus for accessing tensor data. The method may include determining a first row address in a first memory at which one or more first data items to be accessed in a logical structure of the tensor data are stored, copying data items at the first row address in the first memory to a first buffer row of a first buffer, moving each first data item in the first buffer row of the first buffer to a corresponding location at least in a first buffer row of a second buffer, and storing data items in the first buffer row of the second buffer into corresponding target locations in a second memory.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: November 2, 2021
    Assignee: Beijing Horizon Robotics Technology Research and Development Co., Ltd.
    Inventors: Chen Sun, Zhenjiang Wang, Liang Chen, Kun Ling
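    The access pattern can be pictured with two row-addressable memories and per-row staging buffers: a whole source row is read once, the wanted items are shuffled into staging rows, and each staging row is written out with a row-wide store. A loose Python/NumPy sketch of that flow (the row width, move list, and staging dictionary are illustrative assumptions, not the claimed buffer hardware):

      import numpy as np

      ROW_WIDTH = 8
      mem1 = np.arange(4 * ROW_WIDTH).reshape(4, ROW_WIDTH)   # source memory (rows x lanes)
      mem2 = np.zeros_like(mem1)                               # target memory

      def move_items(row_addr, moves):
          """moves: list of (src_col, dst_row, dst_col) for items stored at row_addr.
          Stage the items through row buffers rather than moving them one by one."""
          buf1 = mem1[row_addr].copy()                 # copy the whole source row once
          buf2 = {}                                    # staging buffer row per target row
          for src_col, dst_row, dst_col in moves:
              buf2.setdefault(dst_row, np.zeros(ROW_WIDTH, mem1.dtype))[dst_col] = buf1[src_col]
          for dst_row, row in buf2.items():            # one row-wide store per target row
              cols = [m[2] for m in moves if m[1] == dst_row]
              mem2[dst_row, cols] = row[cols]

      move_items(1, [(0, 2, 5), (3, 2, 0), (7, 0, 1)])
      # Items 8, 11 and 15 from row 1 of mem1 now sit at (2,5), (2,0) and (0,1) of mem2.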
  • Patent number: 10936487
    Abstract: A method and apparatus are disclosed that perform circular addressing to emulate a virtually unlimited memory space despite the fixed capacity of a physical memory, by readdressing the portion of the data that exceeds the pre-defined length of the circular addressing region to another pre-defined address within that region. Data segments in a data sample can be loaded and computed with recalculated circular addresses for different applications.
    Type: Grant
    Filed: March 12, 2018
    Date of Patent: March 2, 2021
    Inventors: Delin Li, Zhenjiang Wang, Wenhui Cao, Kun Lin, Liang Chen, Jianjun Li, Chang Huang
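    The address recalculation itself reduces to wrapping any offset past the end of the circular addressing region back into it. A brief Python sketch of that mapping (the region base and length are arbitrary example values):

      def circular_address(addr, region_base, region_len):
          """Re-map any address past the end of the circular addressing region
          back into it, so software can act as if the region were unbounded."""
          return region_base + ((addr - region_base) % region_len)

      # A data sample longer than the region is loaded segment by segment; each
      # segment's addresses are recalculated so they fall inside the region.
      region_base, region_len = 0x1000, 256
      for logical in range(0x1000, 0x1000 + 600, 100):
          print(hex(logical), "->", hex(circular_address(logical, region_base, region_len)))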
  • Publication number: 20200192803
    Abstract: Disclosed are a method and an apparatus for accessing tensor data. The method may include determining a first row address in a first memory at which one or more first data items to be accessed in a logical structure of the tensor data are stored, copying data items at the first row address in the first memory to a first buffer row of a first buffer, moving each first data item in the first buffer row of the first buffer to a corresponding location at least in a first buffer row of a second buffer, and storing data items in the first buffer row of the second buffer into corresponding target locations in a second memory.
    Type: Application
    Filed: December 16, 2019
    Publication date: June 18, 2020
    Inventors: Chen Sun, Zhenjiang Wang, Liang Chen, Kun Ling
  • Publication number: 20190294438
    Abstract: Systems and methods of data processing are provided. The method comprises receiving input data to be processed by a series of operations, identifying a first operation from the series of operations, selecting at least one second operation from the series of operations to be grouped with the first operation based at least in part on the amount of input data and output data of the grouped operations and the capacity of the memory unit, and processing a portion of the input data of the grouped operations. The efficiency of the series of operations can be improved by ensuring that the input data and output data of any operation are both stored in the memory unit.
    Type: Application
    Filed: March 22, 2019
    Publication date: September 26, 2019
    Inventors: Zhenjiang WANG, Jianjun LI, Liang CHEN, Kun LING, Delin LI, Chen SUN
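    A simple way to read the grouping criterion is a greedy pass that keeps extending the current group while the group's data still fits in the on-chip memory unit. A hedged Python sketch (the fit test and byte sizes are simplified assumptions; the claimed method also processes the input in portions, which is not modeled here):

      def group_operations(ops, memory_capacity):
          """Greedy sketch: extend the current group with following operations while
          the group's external input plus the candidate's output still fit in
          on-chip memory. Each op is (name, input_bytes, output_bytes)."""
          groups, i = [], 0
          while i < len(ops):
              group_in = ops[i][1]              # input data of the grouped operations
              j = i + 1
              while j < len(ops) and group_in + ops[j][2] <= memory_capacity:
                  j += 1                        # intermediate results stay on chip
              groups.append([name for name, _, _ in ops[i:j]])
              i = j
          return groups

      ops = [("conv1", 300, 600), ("relu1", 600, 600), ("conv2", 600, 900), ("pool", 900, 250)]
      print(group_operations(ops, memory_capacity=1024))
      # -> [['conv1', 'relu1'], ['conv2', 'pool']]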
  • Publication number: 20190278707
    Abstract: A method and apparatus are disclosed that perform circular addressing to emulate a virtually unlimited memory space despite the fixed capacity of a physical memory, by readdressing the portion of the data that exceeds the pre-defined length of the circular addressing region to another pre-defined address within that region. Data segments in a data sample can be loaded and computed with recalculated circular addresses for different applications.
    Type: Application
    Filed: March 12, 2018
    Publication date: September 12, 2019
    Applicant: Beijing Horizon Information Technology Co., Ltd.
    Inventors: Delin Li, Zhenjiang Wang, Wenhui Cao, Kun Lin, Liang Chen, Jianjun Li, Chang Huang