Patents by Inventor Haiwei Liu
Haiwei Liu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20240164022
  Abstract: A display module includes a display panel with a plurality of bonding electrodes arranged at intervals along a selected side edge of a non-display surface and divided into two bonding electrode groups, a first flexible circuit board, and a second flexible circuit board. For the first flexible circuit board, each first conductive contact piece in a first wiring region is connected to a bonding electrode in a first bonding electrode group. For the second flexible circuit board, each second conductive contact piece in a second wiring region is connected to a bonding electrode in a second bonding electrode group. The first wiring region is closer to the selected side edge than the second wiring region in a first direction. The first fan-out region is spaced apart from the second wiring region in a second direction perpendicular to the first direction.
  Type: Application
  Filed: November 1, 2021
  Publication date: May 16, 2024
  Applicants: BOE MLED Technology Co., Ltd., BOE TECHNOLOGY GROUP CO., LTD.
  Inventors: Jing WANG, Mingming JIA, Lili WANG, Sha FENG, Chao LIU, Ming ZHAI, Haiwei SUN, Lingyun SHI, Liqiang WANG, Jingjing ZHANG
- Patent number: 11977287
  Abstract: A display device is provided, including: a back plate; a plastic frame connected to the back plate; a backlight module; a privacy film on a light emitting side of the backlight module; and a dimming sheet on a side of the privacy film away from the back plate, wherein the dimming sheet is configured to be capable of adjusting a viewing angle of the display device. The plastic frame includes a first surface and a side end face: the first surface is the surface of the plastic frame close to the dimming sheet, and the side end face is the side face of the plastic frame close to the dimming sheet. The plastic frame further includes a chamfered portion at the transition position between the side end face and the first surface, and the chamfered portion has a rough surface configured to diffract the light incident on the chamfered portion.
  Type: Grant
  Filed: January 8, 2021
  Date of Patent: May 7, 2024
  Assignees: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
  Inventors: Heling Zhu, Jie Liu, Xin Li, Bo Han, Hai Tang, Jianwei Qin, Jian Sang, Lu Yu, Haiwei Sun
- Patent number: 11886534
  Abstract: The present invention discloses a method and system for filtering parallel computing results. The input value of the first valid position (fvp) of each fragment is generated simultaneously, and the output result corresponding to each fragment's fvp input value is computed simultaneously. Then, according to the output result of the fvp of the first fragment, the parallel computing results are filtered by selecting the output results of the second to S-th fragments in sequence, finally yielding correct parallel computing results. By adopting parallel filtering, the original serial filtering computation is changed into parallel computation over S fragments, so the computing time is only one S-th of the original, improving computing efficiency and satisfying the timing requirements of parallel computation.
  Type: Grant
  Filed: September 29, 2019
  Date of Patent: January 30, 2024
  Assignee: Inspur Electronic Information Industry Co., Ltd.
  Inventors: Hongzhi Shi, Haiwei Liu, Jian Zhao
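The speculative-compute-then-select pattern the abstract describes can be illustrated with a toy Python analogue: a decimation filter whose phase (its first valid position) carries across fragments. The filter itself, `k`, `S`, and the function names are illustrative stand-ins, not the patented implementation:

```python
def serial_decimate(xs, k):
    # Reference serial filter: keep every k-th element, starting at index 0.
    return xs[::k]

def parallel_decimate(xs, k, S):
    # Split the stream into S fragments (processed in parallel on hardware).
    n = len(xs)
    L = (n + S - 1) // S
    frags = [xs[j * L:(j + 1) * L] for j in range(S)]
    # Speculative phase: every fragment precomputes its output for each
    # possible first valid position (fvp) 0..k-1, all independently.
    candidates = [{fvp: frag[fvp::k] for fvp in range(k)} for frag in frags]
    # Selection phase: a short serial chain picks the right candidate, since
    # fragment j's fvp follows from fragment j-1's fvp and length.
    out, fvp = [], 0
    for frag, cand in zip(frags, candidates):
        out.extend(cand[fvp])
        fvp = (fvp - len(frag)) % k
    return out
```

On hardware the candidate outputs of all fragments are produced concurrently, so only the cheap selection chain remains serial.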
- Patent number: 11830244
  Abstract: An image recognition method and apparatus based on a systolic array, and a medium are disclosed. The method includes: converting obtained image feature information into a one-dimensional feature vector; converting an obtained weight matrix into a one-dimensional weight vector, and allocating a corresponding weight group to each node in a trained three-dimensional systolic array model; performing multiply-accumulate of the feature vector and a weight value on the one-dimensional feature vector in parallel by using the three-dimensional systolic array model, to obtain a feature value corresponding to each node, the feature values with different values reflecting article categories contained in an image; and determining an article category contained in the image according to the feature value corresponding to each node and a pre-established corresponding relationship between the feature value and the article category.
  Type: Grant
  Filed: April 26, 2021
  Date of Patent: November 28, 2023
  Assignee: INSPUR SUZHOU INTELLIGENT TECHNOLOGY CO., LTD.
  Inventors: Gang Dong, Yaqian Zhao, Rengang Li, Hongbin Yang, Haiwei Liu, Dongdong Jiang
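A minimal sketch of that recognition flow, assuming a toy 2-D feature map and treating each weight group as one node of the array; the function name and the argmax decision rule are illustrative assumptions, not the patent's exact mapping:

```python
def classify(feature_map, weight_groups, categories):
    # Convert the obtained 2-D feature map into a one-dimensional vector.
    fv = [v for row in feature_map for v in row]
    # Each node is allocated one weight group and performs multiply-accumulate
    # with the shared feature vector (done in parallel on the systolic array).
    node_values = [sum(w * x for w, x in zip(wg, fv)) for wg in weight_groups]
    # Pre-established correspondence: node index -> article category; the
    # strongest-responding node decides the category in this toy model.
    best = max(range(len(node_values)), key=node_values.__getitem__)
    return categories[best], node_values
```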
- Patent number: 11803475
  Abstract: The present invention provides a method and apparatus for data caching. Output matrices are acquired one by one and written alternately into two queue sets of a first cache unit in the sequence in which they are acquired. The matrices stored line by line in the first cache unit are written into a second cache unit one by one; according to the sequence in which they are written into the second cache unit, the valid data of each output matrix is determined according to preset parameters and written into a third cache unit. The valid data of the output matrices stored in the third cache unit are then written sequentially into a memory in the order in which they entered the third cache unit.
  Type: Grant
  Filed: November 28, 2019
  Date of Patent: October 31, 2023
  Assignee: INSPUR ELECTRONIC INFORMATION INDUSTRY CO., LTD.
  Inventors: Haiwei Liu, Gang Dong, Hongbin Yang, Yaqian Zhao, Rengang Li, Hongzhi Shi
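The three-level caching flow can be sketched as a toy Python model. The class name, the "crop the top-left valid region" rule, and the unbounded queue sizes are assumptions for illustration, not the patented circuit:

```python
from collections import deque

class CachePipeline:
    # First cache: two queue sets written alternately; second cache holds one
    # matrix at a time; third cache keeps only the valid sub-matrix, which is
    # finally drained to 'memory' in write order.
    def __init__(self, valid_rows, valid_cols):
        self.first = (deque(), deque())
        self.turn = 0
        self.order = deque()   # remembers arrival order across the queue sets
        self.third = deque()
        self.valid = (valid_rows, valid_cols)

    def write(self, matrix):
        # Alternate writes between the two queue sets of the first cache.
        self.first[self.turn].append(matrix)
        self.order.append(self.turn)
        self.turn ^= 1

    def process(self):
        # Move matrices one by one through the 'second cache', keep the valid
        # region (preset parameters), and push it into the third cache.
        r, c = self.valid
        while self.order:
            second = self.first[self.order.popleft()].popleft()
            self.third.append([row[:c] for row in second[:r]])

    def drain(self):
        # Valid data leaves the third cache for memory sequentially.
        return [self.third.popleft() for _ in range(len(self.third))]
```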
- Publication number: 20230326199
  Abstract: An image recognition method and apparatus based on a systolic array, and a medium are disclosed. The method includes: converting obtained image feature information into a one-dimensional feature vector; converting an obtained weight matrix into a one-dimensional weight vector, and allocating a corresponding weight group to each node in a trained three-dimensional systolic array model; performing multiply-accumulate of the feature vector and a weight value on the one-dimensional feature vector in parallel by using the three-dimensional systolic array model, to obtain a feature value corresponding to each node, the feature values with different values reflecting article categories contained in an image; and determining an article category contained in the image according to the feature value corresponding to each node and a pre-established corresponding relationship between the feature value and the article category.
  Type: Application
  Filed: April 26, 2021
  Publication date: October 12, 2023
  Applicant: Inspur Suzhou Intelligent Technology Co., Ltd.
  Inventors: Gang DONG, Yaqian ZHAO, Rengang LI, Hongbin YANG, Haiwei LIU, Dongdong JIANG
- Patent number: 11775803
  Abstract: A system for accelerating an RNN network including: a first cache, which is used for outputting Wx1 to WxN or Wh1 to WhN in parallel in N paths in a cyclic switching manner, with a degree of parallelism of k; a second cache, which is used for outputting xt or ht-1 in the cyclic switching manner; a vector multiplication circuit, which is used for, by using N groups of multiplication arrays, respectively calculating Wx1xt to WxNxt, or respectively calculating Wh1ht-1 to WhNht-1; an addition circuit, which is used for calculating Wx1xt+Wh1ht-1+b1 to WxNxt+WhNht-1+bN; an activation circuit, which is used for performing an activation operation according to an output of the addition circuit; a state updating circuit, which is used for acquiring ct-1, calculating ct and ht, updating ct-1, and sending ht to the second cache; a bias data cache; a vector cache; and a cell state cache.
  Type: Grant
  Filed: April 26, 2021
  Date of Patent: October 3, 2023
  Assignee: INSPUR SUZHOU INTELLIGENT TECHNOLOGY CO., LTD.
  Inventors: Haiwei Liu, Gang Dong, Yaqian Zhao, Rengang Li, Dongdong Jiang, Hongbin Yang, Lingyan Liang
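For N = 4 weight groups, the computation these circuits pipeline is the standard LSTM cell update. A scalar Python sketch (vector dimensions and the circuit-level parallelism are omitted for brevity; the gate ordering is an assumption):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(wx, wh, b, x_t, h_prev, c_prev):
    # The N weight groups yield the gate pre-activations
    # Wx_n * x_t + Wh_n * h_{t-1} + b_n (the addition circuit's outputs).
    pre = [wx[n] * x_t + wh[n] * h_prev + b[n] for n in range(4)]
    # Activation circuit: sigmoid for the gates, tanh for the candidate.
    i, f = sigmoid(pre[0]), sigmoid(pre[1])      # input / forget gates
    g, o = math.tanh(pre[2]), sigmoid(pre[3])    # candidate / output gate
    # State-updating circuit: produce ct and ht; ct replaces ct-1.
    c_t = f * c_prev + i * g
    h_t = o * math.tanh(c_t)
    return c_t, h_t
```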
- Publication number: 20230297846
  Abstract: A neural network compression method, apparatus, and device, and a storage medium are provided. The method includes: performing forward inference on target data by using a target parameter sharing network to obtain the output feature map of the last convolutional module; extracting a channel related feature from the output feature map; inputting the extracted channel related feature and a target constraint condition into a target meta-generative network; and predicting an optimal network architecture under the target constraint condition by using the target meta-generative network to obtain a compressed neural network model. This technical solution reduces the computation load of the performance evaluation step of a neural architecture search and increases the speed of searching for a high-performance neural network architecture.
  Type: Application
  Filed: January 25, 2021
  Publication date: September 21, 2023
  Inventors: Wenfeng YIN, Gang DONG, Yaqian ZHAO, Qichun CAO, Lingyan LIANG, Haiwei LIU, Hongbin YANG
- Publication number: 20230267740
  Abstract: A video data processing method, including: obtaining three-dimensional feature data and three-dimensional weight data corresponding to video data; separately preprocessing the three-dimensional feature data and the three-dimensional weight data to obtain a feature value matrix and a weight value matrix; and inputting the feature value matrix and the weight value matrix into a plurality of three-dimensional systolic arrays for parallel computing to obtain a video data processing result. The method fully exploits computational parallelism: a four-dimensional systolic computation architecture is constructed from multiple three-dimensional systolic arrays, which compute on the three-dimensional feature value matrix and three-dimensional weight value matrix in parallel, shortening the computation time of three-dimensional convolution and improving video data processing efficiency.
  Type: Application
  Filed: April 26, 2021
  Publication date: August 24, 2023
  Inventors: Gang DONG, Yaqian ZHAO, Rengang LI, Hongbin YANG, Haiwei LIU, Dongdong JIANG
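The decomposition idea can be sketched in Python: each temporal slice dt of the 3-D kernel is dispatched to its own array (modelled here as a `conv2d` call) and the per-array partial sums are accumulated into the 3-D convolution result. Function names and the valid-padding convention are illustrative assumptions:

```python
def conv2d(plane, k2):
    # Naive valid 2-D convolution, standing in for one systolic array.
    H, W, kh, kw = len(plane), len(plane[0]), len(k2), len(k2[0])
    return [[sum(plane[i + di][j + dj] * k2[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(W - kw + 1)] for i in range(H - kh + 1)]

def conv3d_parallel(feature, kernel):
    # Decompose the 3-D convolution over the temporal kernel axis: the kt
    # slice convolutions for each output frame are independent, so they can
    # run on separate arrays; their partial sums are then accumulated.
    kt, T = len(kernel), len(feature)
    out = []
    for t in range(T - kt + 1):
        partials = [conv2d(feature[t + dt], kernel[dt]) for dt in range(kt)]
        acc = [[sum(p[i][j] for p in partials)
                for j in range(len(partials[0][0]))]
               for i in range(len(partials[0]))]
        out.append(acc)
    return out
```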
- Publication number: 20230252757
  Abstract: Disclosed is an image processing method. A weighted calculation is carried out on a first quantization threshold obtained with a saturated mapping method and a second quantization threshold obtained with an unsaturated mapping method; that is, the two quantization thresholds are fused. The resulting optimal quantization threshold suits most activation output layers, so the effective information of the activation output layers is preserved more effectively. The optimal quantization threshold is then used for subsequent image processing, improving the inference precision of a quantized deep neural network on a low-bit-width hardware platform. An image processing apparatus and device with the same benefits are also disclosed.
  Type: Application
  Filed: September 29, 2019
  Publication date: August 10, 2023
  Inventors: Lingyan Liang, Gang Dong, Yaqian Zhao, Qichun Cao, Haiwei Liu, Hongbin Yang
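A hedged sketch of the fusion idea: a simple percentile clip stands in for the saturated mapping's calibration (the patent does not specify this), and `alpha` is an assumed fusion weight:

```python
def unsaturated_threshold(acts):
    # Unsaturated mapping: keep the full range; threshold = max |activation|.
    return max(abs(a) for a in acts)

def saturated_threshold(acts, pct=0.995):
    # Saturated mapping clips outliers; a percentile is used here as a simple
    # stand-in for the usual KL-divergence calibration (an assumption).
    ranked = sorted(abs(a) for a in acts)
    return ranked[min(int(pct * len(ranked)), len(ranked) - 1)]

def fused_threshold(acts, alpha=0.5):
    # Weighted fusion of the two candidate thresholds; alpha is a tunable
    # weight, not a value taken from the patent.
    return (alpha * saturated_threshold(acts)
            + (1 - alpha) * unsaturated_threshold(acts))

def quantize(acts, threshold, bits=8):
    # Symmetric linear quantization to signed `bits`-bit integers.
    qmax = 2 ** (bits - 1) - 1
    scale = threshold / qmax
    return [max(-qmax, min(qmax, round(a / scale))) for a in acts]
```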
- Patent number: 11698730
  Abstract: A data storage method, apparatus, and device, and a readable storage medium are provided. The method includes: after a random access memory is powered on, obtaining target data to be stored at a fixed storage address of the random access memory; determining a target transmission mode from a bit value change transmission mode and a bit value fixed transmission mode, wherein the target transmission mode is different from the historical transmission mode determined after the random access memory was powered on last time; and transmitting the target data to and from the random access memory according to the target transmission mode. The method prevents the target data from being stolen after power-down and guarantees data security.
  Type: Grant
  Filed: January 22, 2021
  Date of Patent: July 11, 2023
  Assignee: INSPUR ELECTRONIC INFORMATION INDUSTRY CO., LTD.
  Inventors: Dongdong Jiang, Yaqian Zhao, Gang Dong, Rengang Li, Haiwei Liu, Hongbin Yang, Chen Li
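A speculative toy model of the mode alternation across power cycles; modelling the "bit value change" mode as bitwise inversion is purely an assumption for illustration, not the patented encoding:

```python
class SecureRamLink:
    # On each power-up, pick whichever transmission mode was NOT used after
    # the previous power-up (the stored history), then encode/decode data for
    # the RAM accordingly, so remanent values cannot be attributed to a mode.
    MODES = ("fixed", "change")

    def __init__(self, last_mode=None):
        self.mode = "change" if last_mode == "fixed" else "fixed"

    def encode(self, byte):
        # "fixed" transmits the value as-is; "change" is modelled as
        # inversion (an illustrative assumption).
        return byte if self.mode == "fixed" else byte ^ 0xFF

    def decode(self, byte):
        return byte if self.mode == "fixed" else byte ^ 0xFF
```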
- Publication number: 20230196500
  Abstract: Provided are an image data storage method, an image data processing method and system, and a related apparatus. The image data processing method includes the following steps: sequentially storing image data in a dynamic random memory according to a preset storage format, so that adjacent pieces of image data in the dynamic random memory have continuous storage addresses; reading a preset number of pieces of multi-channel parallel image data from the dynamic random memory, and storing the multi-channel parallel image data in a first-in first-out memory of an FPGA; and executing a convolution operation on target image data in the first-in first-out memory to obtain image feature data. By means of the method, the image data processing rate can be increased.
  Type: Application
  Filed: January 26, 2021
  Publication date: June 22, 2023
  Inventors: Dongdong JIANG, Yaqian ZHAO, Gang DONG, Rengang LI, Haiwei LIU, Hongbin YANG
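The continuous-address property the abstract relies on can be sketched as an assumed row-major, channel-major address mapping plus a burst read into the FIFO; the layout and function names are illustrative, not the patent's preset storage format:

```python
def dram_address(c, h, w, H, W, base=0):
    # Assumed channel-major, row-major layout: pixels adjacent in one row of
    # one channel occupy consecutive DRAM addresses, which is what allows a
    # single burst read to fill the FPGA's first-in first-out memory.
    return base + (c * H + h) * W + w

def burst_read(mem, c, h, H, W):
    # Model of reading one full row of channel c into the FIFO in one burst.
    start = dram_address(c, h, 0, H, W)
    return mem[start:start + W]
```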
- Publication number: 20230196068
  Abstract: A system for accelerating an RNN network including: a first cache, which is used for outputting Wx1 to WxN or Wh1 to WhN in parallel in N paths in a cyclic switching manner, with a degree of parallelism of k; a second cache, which is used for outputting xt or ht-1 in the cyclic switching manner; a vector multiplication circuit, which is used for, by using N groups of multiplication arrays, respectively calculating Wx1xt to WxNxt, or respectively calculating Wh1ht-1 to WhNht-1; an addition circuit, which is used for calculating Wx1xt+Wh1ht-1+b1 to WxNxt+WhNht-1+bN; an activation circuit, which is used for performing an activation operation according to an output of the addition circuit; a state updating circuit, which is used for acquiring ct-1, calculating ct and ht, updating ct-1, and sending ht to the second cache; a bias data cache; a vector cache; and a cell state cache.
  Type: Application
  Filed: April 26, 2021
  Publication date: June 22, 2023
  Inventors: Haiwei LIU, Gang DONG, Yaqian ZHAO, Rengang LI, Dongdong JIANG, Hongbin YANG, Lingyan LIANG
- Publication number: 20230133665
  Abstract: A data storage method, apparatus, and device, and a readable storage medium are provided. The method includes: after a random access memory is powered on, obtaining target data to be stored at a fixed storage address of the random access memory; determining a target transmission mode from a bit value change transmission mode and a bit value fixed transmission mode, wherein the target transmission mode is different from the historical transmission mode determined after the random access memory was powered on last time; and transmitting the target data to and from the random access memory according to the target transmission mode. The method prevents the target data from being stolen after power-down and guarantees data security.
  Type: Application
  Filed: January 22, 2021
  Publication date: May 4, 2023
  Inventors: Dongdong JIANG, Yaqian ZHAO, Gang DONG, Rengang LI, Haiwei LIU, Hongbin YANG, Chen LI
- Publication number: 20220342824
  Abstract: The present invention provides a method and apparatus for data caching. Output matrices are acquired one by one and written alternately into two queue sets of a first cache unit in the sequence in which they are acquired. The matrices stored line by line in the first cache unit are written into a second cache unit one by one; according to the sequence in which they are written into the second cache unit, the valid data of each output matrix is determined according to preset parameters and written into a third cache unit. The valid data of the output matrices stored in the third cache unit are then written sequentially into a memory in the order in which they entered the third cache unit.
  Type: Application
  Filed: November 28, 2019
  Publication date: October 27, 2022
  Inventors: Haiwei Liu, Gang Dong, Hongbin Yang, Yaqian Zhao, Rengang Li, Hongzhi Shi
- Publication number: 20220236994
  Abstract: A method and system for filtering a parallel computing result. The method comprises: simultaneously generating an input value of a first valid position (FVP) for each fragment; simultaneously calculating the output result corresponding to each fragment's FVP input value; and, according to the output result of the FVP of the first fragment, sequentially selecting the output results of the second to S-th fragments, thereby filtering the parallel computing result to finally obtain the correct result. The parallel filtering approach changes the original serial filtering computation into parallel computation over S fragments; the computing time is only one S-th of the original, meeting the timing requirements of parallel computing while improving computing efficiency.
  Type: Application
  Filed: September 29, 2019
  Publication date: July 28, 2022
  Inventors: Hongzhi Shi, Haiwei Liu, Jian Zhao
- Patent number: 9614611
  Abstract: The disclosure provides a method and an apparatus for increasing the capacity of an air interface. After a traffic channel is established on the base station side, the base station transmits ⅛-rate frames to the air interface in a continuous transmission mode if it has not captured a traffic channel frame prefix from a terminal, ensuring that the terminal can receive a forward frame from the base station; the base station reduces transmission of the ⅛-rate frames only after capturing the prefix from the terminal; and the terminal keeps the call and does not release it if continuous good frames are received while demodulating forward traffic frames from the base station. The disclosure reduces the interference between forward channels and increases the capacity of the air interface.
  Type: Grant
  Filed: July 16, 2012
  Date of Patent: April 4, 2017
  Assignee: ZTE CORPORATION
  Inventor: Haiwei Liu
- Publication number: 20150162978
  Abstract: The disclosure provides a method and an apparatus for increasing the capacity of an air interface. After a traffic channel is established on the base station side, the base station transmits ⅛-rate frames to the air interface in a continuous transmission mode if it has not captured a traffic channel frame prefix from a terminal, ensuring that the terminal can receive a forward frame from the base station; the base station reduces transmission of the ⅛-rate frames only after capturing the prefix from the terminal; and the terminal keeps the call and does not release it if continuous good frames are received while demodulating forward traffic frames from the base station. The disclosure reduces the interference between forward channels and increases the capacity of the air interface.
  Type: Application
  Filed: July 16, 2012
  Publication date: June 11, 2015
  Applicant: ZTE CORPORATION
  Inventor: Haiwei Liu