Patents by Inventor Zhenjiang Wang
Zhenjiang Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230409886
Abstract: The present disclosure provides a method and apparatus for deconvolving feature data using convolution hardware. The method includes: reading a feature map and a deconvolution kernel into on-chip memory, and zero-padding the feature map; determining convolution kernels based on the deconvolution kernel; removing each row and/or column of a convolution kernel whose elements are all invalid weights, to obtain an optimized convolution kernel, and removing the corresponding row and/or column of the zero-padded feature map to obtain a corresponding optimized feature map; convolving each optimized convolution kernel with its corresponding optimized feature map using the multiply-add array, to obtain convolutional outputs; and interleaving and synthesizing the convolutional outputs to obtain an interleaved synthetic output including at least a deconvolutional output corresponding to the feature map and deconvolution kernel.
Type: Application
Filed: February 10, 2022
Publication date: December 21, 2023
Applicant: Beijing Horizon Robotics Technology Research and Development Co., Ltd.
Inventors: Zhuoran ZHAO, Kai YU, Chang HUANG, Zhenjiang WANG, Jianjun LI, Delin LI, Yinan ZHANG
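The core idea of the abstract — a deconvolution decomposed into ordinary convolutions whose outputs are interleaved — can be illustrated in one dimension. This is a minimal sketch, not the patented implementation: function names are hypothetical, and the 2D zero-padding/row-removal optimization is omitted.

```python
import numpy as np

def deconv1d_reference(x, k, stride=2):
    """Direct transposed convolution: each input element scatters a scaled
    copy of the full kernel into the output at stride-spaced positions."""
    out = np.zeros((len(x) - 1) * stride + len(k))
    for n, v in enumerate(x):
        out[n * stride : n * stride + len(k)] += v * k
    return out

def deconv1d_split(x, k, stride=2):
    """Same result on convolution hardware: split the deconvolution kernel
    into `stride` phase sub-kernels, run an ordinary convolution per phase,
    and interleave (synthesize) the phase outputs."""
    out = np.zeros((len(x) - 1) * stride + len(k))
    for p in range(stride):
        sub_kernel = k[p::stride]                 # phase-p sub-kernel
        out[p::stride] = np.convolve(x, sub_kernel)  # ordinary convolution
    return out
```

Each phase's ordinary convolution is exactly the kind of operation a multiply-add array executes natively, which is why the decomposition lets convolution hardware produce deconvolution results.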
-
Publication number: 20230376732
Abstract: A processing method includes: obtaining an input feature map; processing the input feature map by using a dilated convolution layer of the convolutional neural network, to obtain a plurality of local feature maps; obtaining a plurality of local output feature maps by performing zero padding on the plurality of local feature maps and performing convolution processing on the zero-padded local feature maps; and fusing the plurality of local output feature maps, to obtain an output feature map processed by the dilated convolution layer. A plurality of consecutive local feature maps can be split from the input feature map, and convolution processing can be performed on each local feature map using a compact convolution kernel. Performing dilated convolution processing on the input feature map without increasing computational complexity overcomes the limitation that holes impose on a dilated convolution algorithm, and enables data reuse between adjacent sliding windows.
Type: Application
Filed: March 30, 2023
Publication date: November 23, 2023
Applicant: Beijing Horizon Information Technology Co., Ltd.
Inventors: Zhuoran ZHAO, Zhao GU, Delin LI, Jianjun LI, Zhenjiang WANG
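The split-then-fuse equivalence described above can be sketched in 1D: a dilated (atrous) convolution over the input equals a dense, compact convolution applied to each of the input's subsampled phases, with the phase outputs interleaved back together. A minimal sketch under assumed "valid" boundary handling; names are illustrative, not from the patent.

```python
import numpy as np

def dilated_conv1d(x, k, d):
    """Valid dilated correlation: y[i] = sum_j x[i + d*j] * k[j]."""
    taps = len(k)
    out_len = len(x) - d * (taps - 1)
    return np.array([np.dot(x[i : i + d * taps : d], k) for i in range(out_len)])

def dilated_via_phases(x, k, d):
    """Same output by splitting x into d subsampled local feature maps,
    running the compact (dense) kernel on each, and fusing the local
    outputs by interleaving."""
    out = np.empty(len(x) - d * (len(k) - 1))
    for p in range(d):
        local = x[p::d]                                # local feature map p
        out[p::d] = np.correlate(local, k, mode='valid')  # compact kernel
    return out
```

Because each phase sees a hole-free, contiguous view of its subsampled data, adjacent sliding windows on a phase overlap and can reuse loaded data, which is the reuse the abstract refers to.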
-
Patent number: 11748250
Abstract: This application discloses a data processing method and apparatus, an electronic device, and a storage medium. When execution is performed at an operation layer of a neural network model, based on a pre-stored buffer allocation relationship, a first address range for cyclic addressing is set for a first buffer corresponding to the input data, and a second address range for cyclic addressing is set for a second buffer corresponding to the output result. Subsequently, cyclic addressing can be performed in the first buffer based on the first address range, to read the input data for the operation layer; and cyclic addressing can be performed in the second buffer based on the second address range, to write the operation layer's output result into the second buffer. In this way, buffer utilization is effectively improved, which in turn improves the model's operation efficiency.
Type: Grant
Filed: November 29, 2021
Date of Patent: September 5, 2023
Assignee: Beijing Horizon Robotics Technology Research and Development Co., Ltd.
Inventors: Jianjun Li, Meng Yao, Zhenjiang Wang, Yu Zhou
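The cyclic-addressing behavior the abstract assigns to each buffer can be modeled in a few lines: any access past the end of a buffer's configured address range wraps back to the range's start, so a layer can stream data through a small physical buffer. A toy sketch only; the class and its API are invented for illustration.

```python
class CyclicBuffer:
    """Toy model of a buffer with a cyclic address range: reads and writes
    past the end of [base, base + size) wrap back to the range's start."""

    def __init__(self, base, size):
        self.base = base
        self.size = size
        self.cells = [None] * size

    def _slot(self, addr):
        return (addr - self.base) % self.size   # cyclic addressing

    def write(self, addr, value):
        self.cells[self._slot(addr)] = value

    def read(self, addr):
        return self.cells[self._slot(addr)]
```

A producer that writes sequentially past the end of the range silently reuses the oldest slots, which is what lets a fixed-size buffer serve an arbitrarily long stream of layer inputs or outputs.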
-
Patent number: 11581903
Abstract: Disclosed are a data compression method, a computer-readable storage medium, and an electronic device. The method includes: converting each data item in a to-be-compressed data set into binary data in a preset format; determining a to-be-compressed bit and a significant bit for each data item based on the sequence of all bits of the binary data; determining a compression bit width for the data set based on the bit widths of the significant bits; compressing each data item based on the compression bit width, to obtain a compressed data set; and generating attribute information of the compressed data set. According to the present disclosure, the significant bit can be determined from the sequence of all bits without reordering the bits of the binary data, thereby simplifying the compression process and improving compression efficiency.
Type: Grant
Filed: November 15, 2021
Date of Patent: February 14, 2023
Assignee: Beijing Horizon Information Technology Co., Ltd.
Inventors: Zhenjiang Wang, Jianjun Li, Zhuoran Zhao, Chang Huang
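The shared-width packing scheme the abstract outlines can be sketched for the simplest case: every value in the set is stored at the largest significant-bit width found in the set, plus a small piece of attribute information for decompression. This assumes non-negative integers and invents the function names; the patented method also covers sign and format handling not shown here.

```python
def compress(values):
    """Pack every value of the set at one shared width: the largest
    significant-bit width found in the set. The returned attribute info
    is just (width, count) in this simplification."""
    width = max(v.bit_length() for v in values) or 1   # compression bit width
    packed = 0
    for v in values:
        packed = (packed << width) | v
    return packed, width, len(values)

def decompress(packed, width, count):
    """Recover the original values using the stored attribute info."""
    mask = (1 << width) - 1
    return [(packed >> (width * (count - 1 - i))) & mask for i in range(count)]
```

Note that `bit_length()` reads the significant width directly off the existing bit sequence, mirroring the abstract's point that no bit reordering is needed.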
-
Publication number: 20220351329
Abstract: Embodiments of the present disclosure disclose an image processing method, a method for generating instructions for image processing, and apparatuses therefor. The method includes: when an ROI image is obtained, splitting it into a plurality of image blocks based on a first image size supported by an image processing model, first split data, and the obtained ROI image, where each image size obtained by splitting the first image size based on the first split data matches a hardware output size of the image scaling module; performing image scaling on each image block to obtain scaled image blocks, where each scaled image block's size is consistent with its respective target size; and inputting all scaled image blocks to the image processing model sequentially. In the embodiments, even though the output of the image scaling module is limited, subsequent processes involved in visual image processing can still be executed properly.
Type: Application
Filed: November 24, 2020
Publication date: November 3, 2022
Inventors: Junzhi Shen, Zhenjiang Wang, Jianjun Li
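The splitting step can be sketched as a simple tiling: the ROI is cut into blocks no larger than the scaling module's hardware output size, and each block is then scaled and fed to the model in order. A hypothetical helper under assumed rectangular tiling; the patent's "first split data" and instruction generation are not modeled.

```python
def split_roi(roi_w, roi_h, max_w, max_h):
    """Split an ROI into tiles that each fit the scaling module's hardware
    output size; tiles on the right/bottom edge may be smaller. Returns
    (x, y, w, h) per tile, in the order they would be fed to the model."""
    tiles = []
    for y in range(0, roi_h, max_h):
        for x in range(0, roi_w, max_w):
            tiles.append((x, y, min(max_w, roi_w - x), min(max_h, roi_h - y)))
    return tiles
```

Every tile respects the hardware limit by construction, which is how a size-limited scaler can still serve arbitrarily large ROIs.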
-
Publication number: 20220197786
Abstract: This application discloses a data processing method and apparatus, an electronic device, and a storage medium. When execution is performed at an operation layer of a neural network model, based on a pre-stored buffer allocation relationship, a first address range for cyclic addressing is set for a first buffer corresponding to the input data, and a second address range for cyclic addressing is set for a second buffer corresponding to the output result. Subsequently, cyclic addressing can be performed in the first buffer based on the first address range, to read the input data for the operation layer; and cyclic addressing can be performed in the second buffer based on the second address range, to write the operation layer's output result into the second buffer. In this way, buffer utilization is effectively improved, which in turn improves the model's operation efficiency.
Type: Application
Filed: November 29, 2021
Publication date: June 23, 2022
Inventors: Jianjun Li, Meng Yao, Zhenjiang Wang, Yu Zhou
-
Publication number: 20220182072
Abstract: Embodiments of the present disclosure disclose a data compression method and apparatus, a computer-readable storage medium, and an electronic device. The method includes: converting each data item in a to-be-compressed data set into binary data in a preset format; determining a to-be-compressed bit and a significant bit for each data item based on the sequence of all bits of the binary data; determining a compression bit width for the data set based on the bit widths of the significant bits; compressing each data item based on the compression bit width, to obtain a compressed data set; and generating attribute information of the compressed data set. According to the embodiments of the present disclosure, the significant bit can be determined from the sequence of all bits without reordering the bits of the binary data.
Type: Application
Filed: November 15, 2021
Publication date: June 9, 2022
Inventors: Zhenjiang Wang, Jianjun Li, Zhuoran Zhao, Chang Huang
-
Publication number: 20220076097
Abstract: The present application discloses a neural network computation method that includes: determining the size of the first feature map, obtained when the processor computes the current layer of the neural network, before performing convolution computation on the next layer; determining a convolution computation order for the next layer according to the size of the first feature map and the size of the second feature map for a convolution supported by the next layer; and executing convolution computation instructions for the next layer based on that order. Exemplary embodiments in the present disclosure decrease inter-layer feature-map data access overhead and reduce the idle time of a computation unit by eliminating the storage of the first feature map and the loading of the second feature map.
Type: Application
Filed: September 7, 2021
Publication date: March 10, 2022
Applicant: HORIZON (SHANGHAI) ARTIFICIAL INTELLIGENCE TECHNOLOGY CO., LTD.
Inventors: Zhuoran ZHAO, Zhenjiang WANG
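The payoff of reordering across layers — never storing and reloading the intermediate feature map — can be demonstrated with two stacked 1D convolutions. This is an assumed fusion strategy sketched for illustration, not the patent's scheduling algorithm; function names are invented.

```python
import numpy as np

def two_layer_stored(x, k1, k2):
    """Baseline: compute layer 1 fully, store it, reload it for layer 2."""
    h = np.correlate(x, k1, mode='valid')   # first feature map hits memory
    return np.correlate(h, k2, mode='valid')

def two_layer_fused(x, k1, k2):
    """Reordered computation: produce each next-layer output as soon as its
    receptive field of raw input is available, so the first feature map is
    never stored or reloaded in full."""
    t1, t2 = len(k1), len(k2)
    out = np.empty(len(x) - t1 - t2 + 2)
    for i in range(len(out)):
        window = x[i : i + t1 + t2 - 1]                   # receptive field
        piece = np.correlate(window, k1, mode='valid')    # only t2 values
        out[i] = np.dot(piece, k2)
    return out
```

The fused version keeps only a kernel-sized slice of the intermediate map alive at any time, trading a little recomputation for the eliminated store/load traffic the abstract describes.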
-
Patent number: 11163686
Abstract: Disclosed are a method and an apparatus for accessing tensor data. The method may include: determining a first row address in a first memory at which one or more first data items to be accessed in the logical structure of the tensor data are stored; copying the data items at that row address into a first buffer row of a first buffer; moving each first data item in that buffer row to a corresponding location in at least a first buffer row of a second buffer; and storing the data items in the second buffer's first buffer row into their corresponding target locations in the second memory.
Type: Grant
Filed: December 16, 2019
Date of Patent: November 2, 2021
Assignee: Beijing Horizon Robotics Technology Research and Development Co., Ltd.
Inventors: Chen Sun, Zhenjiang Wang, Liang Chen, Kun Ling
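One round of the three-stage pipeline the abstract describes — row copy into a first buffer, in-buffer item movement into a second buffer, store to target locations — can be sketched as follows. The `placement` map is a hypothetical stand-in for whatever layout transformation the hardware performs (a transpose in the usage below); none of these names come from the patent.

```python
def move_row(src_mem, src_row, placement, dst_mem):
    """One round of the two-buffer pipeline: copy a source row into buffer A,
    move each item to its target location via buffer B, then store buffer
    B's items into the second memory. `placement` maps source column ->
    (target row, target column)."""
    buffer_a = list(src_mem[src_row])                        # stage 1: row copy
    buffer_b = {placement[c]: v for c, v in enumerate(buffer_a)}  # stage 2: move
    for (r, c), v in buffer_b.items():                       # stage 3: store
        dst_mem[r][c] = v
```

Because the expensive wide-row access happens once per row (stage 1) and all per-item shuffling happens in fast buffers (stage 2), the scheme amortizes memory-row access cost over many tensor elements.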
-
Patent number: 10936487
Abstract: A method and apparatus are disclosed that perform circular addressing to emulate a virtually unlimited memory space despite the fixed capacity of a physical memory, by readdressing the portion of the data that exceeds the pre-defined length of the circular addressing region to another pre-defined address within that region. Data segments in a data sample can be loaded and computed with recalculated circular addresses for different applications.
Type: Grant
Filed: March 12, 2018
Date of Patent: March 2, 2021
Inventors: Delin Li, Zhenjiang Wang, Wenhui Cao, Kun Lin, Liang Chen, Jianjun Li, Chang Huang
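The address recalculation at the heart of this abstract fits in one line: any logical address that runs past the end of the circular region is readdressed back into the region. A minimal sketch with an invented function name, assuming a single contiguous region.

```python
def circular_addresses(offset, length, region_base, region_size):
    """Physical addresses for a `length`-element segment starting at logical
    `offset`: any address that would run past the end of the circular
    addressing region wraps back to the region's base."""
    return [region_base + (offset + i) % region_size for i in range(length)]
```

Because the logical offset is unbounded while the physical addresses stay inside the fixed region, successive data segments of a long sample can be loaded through the same small memory, which is the "virtually unlimited" effect the abstract claims.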
-
Publication number: 20200192803
Abstract: Disclosed are a method and an apparatus for accessing tensor data. The method may include: determining a first row address in a first memory at which one or more first data items to be accessed in the logical structure of the tensor data are stored; copying the data items at that row address into a first buffer row of a first buffer; moving each first data item in that buffer row to a corresponding location in at least a first buffer row of a second buffer; and storing the data items in the second buffer's first buffer row into their corresponding target locations in the second memory.
Type: Application
Filed: December 16, 2019
Publication date: June 18, 2020
Inventors: Chen Sun, Zhenjiang Wang, Liang Chen, Kun Ling
-
Publication number: 20190294438
Abstract: Systems and methods of data processing are provided. The method comprises: receiving input data to be processed by a series of operations; identifying a first operation from the series; selecting at least one second operation from the series to be grouped with the first operation, based at least in part on the amounts of input and output data of the grouped operations and the capacity of the memory unit; and processing a portion of the grouped operations' input data. The efficiency of the series of operations can be improved by ensuring that the input and output data of any operation are both stored in the memory unit.
Type: Application
Filed: March 22, 2019
Publication date: September 26, 2019
Inventors: Zhenjiang WANG, Jianjun LI, Liang CHEN, Kun LING, Delin LI, Chen SUN
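The grouping decision can be sketched with a greedy pass under a deliberately simplified cost model: a group keeps absorbing the next operation while its external input plus output still fit in the memory unit, with intermediates assumed to be consumed tile-by-tile (per the "portion of the input data" processing the abstract mentions). The cost model and names are assumptions, not the patent's actual criterion.

```python
def group_operations(tensor_sizes, capacity):
    """Greedy grouping sketch. `tensor_sizes[i]` is the size of the tensor
    flowing into operation i (the last entry is the final output). A group
    [start..op] is kept while its external input and output fit together
    in on-chip memory; a group always retains at least one operation."""
    num_ops = len(tensor_sizes) - 1
    groups, start = [], 0
    for op in range(num_ops):
        fits = tensor_sizes[start] + tensor_sizes[op + 1] <= capacity
        if not fits and op > start:
            groups.append((start, op - 1))   # close the current group
            start = op
    groups.append((start, num_ops - 1))
    return groups
```

Keeping a group's input and output resident together is what removes the off-chip round trips between the grouped operations.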
-
Publication number: 20190278707
Abstract: A method and apparatus are disclosed that perform circular addressing to emulate a virtually unlimited memory space despite the fixed capacity of a physical memory, by readdressing the portion of the data that exceeds the pre-defined length of the circular addressing region to another pre-defined address within that region. Data segments in a data sample can be loaded and computed with recalculated circular addresses for different applications.
Type: Application
Filed: March 12, 2018
Publication date: September 12, 2019
Applicant: Beijing Horizon Information Technology Co., Ltd.
Inventors: Delin Li, Zhenjiang Wang, Wenhui Cao, Kun Lin, Liang Chen, Jianjun Li, Chang Huang