Patents by Inventor Hongwei QIN

Hongwei QIN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250040317
    Abstract: The present disclosure provides a display apparatus and a display panel. The display panel includes: a drive backplate, including a substrate and a drive circuit layer on the substrate, where the drive circuit layer includes drive circuit regions arranged at intervals and transparent regions around the drive circuit regions, the transparent regions include concave parts, and the number of layers of the drive circuit layer in the respective concave parts is less than the number of layers in the respective drive circuit regions; pads, on a side of the drive circuit regions of the drive circuit layer away from the substrate, for connecting a light-emitting unit; and a light-transmissive glue filling the concave parts, where a surface of the light-transmissive glue away from the substrate is on a side of the pads away from the substrate. The present disclosure can improve the display effect.
    Type: Application
    Filed: June 29, 2022
    Publication date: January 30, 2025
    Inventors: Yanan NIU, Yan QU, Dongni LIU, Jinqian WANG, Jing NIU, Tingting ZHOU, Shuang SUN, Bin QIN, Hongwei TIAN, Fangzhen ZHANG, Wei WANG
  • Publication number: 20240414345
    Abstract: An image compression method comprises: performing feature extraction on a target image to obtain a first feature map comprising a plurality of channels; grouping the channels of the first feature map to obtain a plurality of second feature maps; performing spatial context feature extraction on the second feature maps to determine first spatial redundancy features corresponding to the second feature maps; performing channel context feature extraction on the second feature maps to determine first channel redundancy features corresponding to the second feature maps; determining compression information corresponding to each of the second feature maps based on the first spatial redundancy feature and the first channel redundancy feature corresponding to that second feature map, thereby determining first compressed data corresponding to the target image; and performing deep compression processing based on the first feature map to determine second compressed data corresponding to the target image.
    Type: Application
    Filed: August 22, 2024
    Publication date: December 12, 2024
    Inventors: Dailan HE, Ziming YANG, Yan WANG, Hongwei QIN
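    The grouped spatial/channel context pipeline summarized in the abstract above could look roughly like the following PyTorch sketch. The analysis transform, the group count, the plain convolutions standing in for the spatial and channel context models, and the (mean, scale) entropy-parameter head are illustrative assumptions rather than the patented implementation; a real context model would also keep the spatial context causal (e.g. via masked or checkerboard convolutions).

      import torch
      import torch.nn as nn

      class GroupedContextModel(nn.Module):
          """Splits the first feature map into channel groups (the "second feature maps") and
          derives per-group entropy parameters from a spatial and a channel redundancy feature."""
          def __init__(self, channels=192, groups=4):
              super().__init__()
              assert channels % groups == 0
              self.groups = groups
              gc = channels // groups
              # Per-group spatial context (placeholder convolution, not masked/causal here).
              self.spatial_ctx = nn.ModuleList(nn.Conv2d(gc, gc, 3, padding=1) for _ in range(groups))
              # Channel context for group i conditions on groups 0..i-1.
              self.channel_ctx = nn.ModuleList(nn.Conv2d(gc * i, gc, 1) for i in range(1, groups))
              # Fuse the two redundancy features into (mean, scale) entropy parameters per group.
              self.param_head = nn.ModuleList(nn.Conv2d(gc * 2, gc * 2, 1) for _ in range(groups))

          def forward(self, y):
              groups = torch.chunk(y, self.groups, dim=1)           # the "second feature maps"
              seen, params = [], []
              for i, g in enumerate(groups):
                  spatial = self.spatial_ctx[i](g)                  # spatial redundancy feature
                  channel = (torch.zeros_like(spatial) if i == 0
                             else self.channel_ctx[i - 1](torch.cat(seen, dim=1)))  # channel redundancy feature
                  mean, scale = self.param_head[i](torch.cat([spatial, channel], dim=1)).chunk(2, dim=1)
                  params.append((mean, scale))                      # would drive an entropy coder
                  seen.append(g)
              return params

      feats = torch.randn(1, 192, 16, 16)        # stand-in for the extracted first feature map
      print(len(GroupedContextModel()(feats)))   # one (mean, scale) pair per group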
  • Publication number: 20240195968
    Abstract: A video processing method, an electronic device and a storage medium are provided. The method includes: a target frame is determined in a video to be processed; at least two first energy images corresponding to the target frame are determined based on at least two preset macro block sizes respectively, where each first energy image represents alternating current energy of first macro block(s) corresponding to a respective macro block size and the first macro block(s) are obtained by segmenting the target frame based on the respective macro block size; a first energy map corresponding to the target frame is determined based on the first energy images, where the first energy map represents energy distribution in the target frame; and an adaptive quantization parameter corresponding to the target frame is determined based on the first energy map, and the target frame is encoded using the adaptive quantization parameter.
    Type: Application
    Filed: February 19, 2024
    Publication date: June 13, 2024
    Inventors: Tongda XU, Chenjian GAO, Yan WANG, Tao YUAN, Hongwei QIN
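    The energy-map construction in the abstract above can be illustrated with a small NumPy sketch: per-block alternating-current energy is computed for two macro block sizes, the resulting energy images are combined into a single energy map, and the map is turned into an adaptive quantization parameter per block. The block sizes, the variance-style AC energy, the averaging across sizes and the logarithmic QP offset are all assumptions for illustration.

      import numpy as np

      def ac_energy_image(frame, block):
          """Per-block AC energy: sum of squared deviations from the block mean (the DC part)."""
          energy = np.empty((frame.shape[0] // block, frame.shape[1] // block))
          for i in range(energy.shape[0]):
              for j in range(energy.shape[1]):
                  blk = frame[i * block:(i + 1) * block, j * block:(j + 1) * block]
                  energy[i, j] = np.sum((blk - blk.mean()) ** 2)
          return energy

      def adaptive_qp(frame, block_sizes=(8, 16), base_qp=32, strength=6.0):
          """Energy map from several macro block sizes -> one QP per coarsest-grid block."""
          coarsest = max(block_sizes)          # assumes frame dims divisible by this size
          maps = []
          for b in block_sizes:
              e = ac_energy_image(frame, b)
              s = coarsest // b
              # Average finer grids onto the coarsest grid so all energy images align.
              maps.append(e.reshape(e.shape[0] // s, s, e.shape[1] // s, s).mean(axis=(1, 3)))
          energy_map = np.mean(maps, axis=0)   # energy distribution over the target frame
          # Flat (low-energy) blocks get a lower QP, busy blocks a higher one, clamped to
          # an H.264/H.265-style 0..51 range (this mapping is an assumption).
          rel = np.log2((energy_map + 1.0) / (energy_map.mean() + 1.0))
          return np.clip(base_qp + strength * rel, 0, 51).astype(int)

      frame = np.random.randint(0, 256, (64, 64)).astype(np.float64)
      print(adaptive_qp(frame))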
  • Patent number: 11699240
    Abstract: A target tracking method includes: obtaining feature data of a reference frame of a first image frame, wherein the first image frame and at least one second image frame have the same reference frame; and determining the position of a tracking target in the first image frame based on the feature data of the reference frame. Based on the embodiments in the present disclosure, feature data of a reference frame of a first image frame is acquired, and the position of a tracking target in the first image frame is determined based on the feature data of the reference frame.
    Type: Grant
    Filed: March 16, 2020
    Date of Patent: July 11, 2023
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Shaohui Liu, Hongwei Qin
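    A minimal PyTorch sketch of the reuse described in the abstract above: the reference frame's features are computed once and shared by every image frame that has that reference, with a depthwise cross-correlation standing in for the position head. The backbone, the correlation and the argmax localization are placeholder assumptions, not the patented network.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(16, 16, 3, padding=1))

      @torch.no_grad()
      def track(reference_crop, frames):
          """Extract reference features a single time, then reuse them for all frames sharing that reference."""
          ref_feat = backbone(reference_crop)            # (1, 16, hr, wr), computed once
          kernel = ref_feat.permute(1, 0, 2, 3)          # depthwise correlation kernel (16, 1, hr, wr)
          positions = []
          for frame in frames:                           # first image frame, second image frame, ...
              feat = backbone(frame)                     # (1, 16, H, W)
              score = F.conv2d(feat, kernel, groups=16).mean(dim=1)   # similarity map
              idx = int(score.flatten(1).argmax(dim=1))
              y, x = divmod(idx, score.shape[-1])
              positions.append((y, x))                   # top-left of the best-matching window
          return positions

      reference = torch.randn(1, 3, 32, 32)              # crop around the tracking target
      frames = [torch.randn(1, 3, 128, 128) for _ in range(3)]
      print(track(reference, frames))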
  • Patent number: 11403489
    Abstract: Provided are a target object processing method and apparatus, and a storage medium. The method includes: inputting first data into a first processing module to obtain a predicted data annotation result; inputting the data annotation result into a second processing module, and performing scene-adaptive incremental learning according to the data annotation result to obtain a neural network adapted to a scene of second data; and processing a scene corresponding to a target object according to data including the target object and the neural network.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: August 2, 2022
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Shixin Han, Yu Guo, Hongwei Qin, Yu Zhao
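    A toy PyTorch sketch of the two-stage flow in the abstract above: a pretrained "first processing module" produces predicted annotation results for unlabeled data from the new scene, and a "second processing module" performs incremental learning on those pseudo-labels to obtain a network adapted to that scene. The linear models, the loss and the optimizer settings are assumptions.

      import torch
      import torch.nn as nn

      pretrained = nn.Linear(8, 2)          # stand-in for the first processing module
      adapted = nn.Linear(8, 2)             # network to be adapted to the scene of the second data
      adapted.load_state_dict(pretrained.state_dict())

      scene_data = torch.randn(64, 8)       # unlabeled data from the target scene

      # Step 1: predict data annotation results with the first processing module.
      with torch.no_grad():
          pseudo_labels = pretrained(scene_data).argmax(dim=1)

      # Step 2: scene-adaptive incremental learning on the annotated data.
      optimizer = torch.optim.SGD(adapted.parameters(), lr=1e-2)
      for _ in range(20):
          loss = nn.functional.cross_entropy(adapted(scene_data), pseudo_labels)
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()

      # Step 3: process the target object's scene with the adapted network.
      print(adapted(scene_data[:4]).argmax(dim=1))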
  • Patent number: 11334763
    Abstract: An image processing method includes: inputting a to-be-processed image into a neural network; and forming discrete feature data of the to-be-processed image via the neural network, where the neural network is trained based on guidance information, and during the training process, the neural network is taken as a student neural network; the guidance information includes: a difference between discrete feature data formed by a teacher neural network for an image sample and discrete feature data formed by the student neural network for the image sample.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: May 17, 2022
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Yi Wei, Hongwei Qin
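    A loose PyTorch sketch of the guidance described in the abstract above: the teacher and the student both form discrete feature data for the same image samples, and the difference between the two trains the student. The round-with-straight-through quantizer, the two networks and the MSE guidance loss are assumptions for illustration.

      import torch
      import torch.nn as nn

      def discretize(x):
          """Round to integers; the straight-through trick keeps gradients flowing to the student."""
          return x + (torch.round(x) - x).detach()

      teacher = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(8, 8, 3, padding=1))   # teacher providing guidance
      student = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1))   # smaller student being trained

      optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
      for _ in range(10):
          samples = torch.randn(4, 3, 32, 32)                  # stand-in image samples
          with torch.no_grad():
              t_feat = discretize(teacher(samples))            # teacher's discrete feature data
          s_feat = discretize(student(samples))                # student's discrete feature data
          # Guidance information: difference between teacher and student discrete features.
          loss = nn.functional.mse_loss(s_feat, t_feat)
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()
      print(float(loss))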
  • Patent number: 11216955
    Abstract: Target tracking methods and apparatuses, electronic devices, and storage media are provided. The method includes: obtaining features of a plurality of reference images of a target image; determining a plurality of initial predicted positions of a tracking target in the target image based on the features of the plurality of reference images; and determining a final position of the tracking target in the target image based on the plurality of initial predicted positions.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: January 4, 2022
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Shaohui Liu, Hongwei Qin
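    A minimal NumPy sketch of the final step in the abstract above: each reference image yields one initial predicted position of the tracking target, and the final position is derived from all of them. The (x, y, w, h) box format and the confidence-weighted average are assumptions; other fusion rules, such as outlier rejection, would fit the same interface.

      import numpy as np

      def fuse_positions(initial_positions, confidences):
          """Combine per-reference (x, y, w, h) predictions into one final position."""
          boxes = np.asarray(initial_positions, dtype=float)    # (num_references, 4)
          weights = np.asarray(confidences, dtype=float)
          weights = weights / weights.sum()                     # normalized similarity weights
          return tuple((boxes * weights[:, None]).sum(axis=0))

      # One initial predicted position per reference image; the last reference is a poor match.
      initial = [(40, 52, 32, 48), (42, 50, 30, 46), (60, 90, 28, 44)]
      confidences = [0.9, 0.8, 0.2]
      print(fuse_positions(initial, confidences))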
  • Patent number: 10909380
    Abstract: A video recognition and training method, an apparatus, an electronic device and a storage medium are provided. The method includes: extracting features of a first key frame in a video; performing fusion on the features of the first key frame and fusion features of a second key frame in the video to obtain fusion features of the first key frame, where a detection sequence of the second key frame in the video precedes that of the first key frame; and performing detection on the first key frame according to the fusion features of the first key frame to obtain an object detection result of the first key frame. Through iterative multi-frame feature fusion, the information contained in the shared features of the key frames in the video can be enhanced, thereby improving frame recognition accuracy and video recognition efficiency.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: February 2, 2021
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Tangcongrui He, Hongwei Qin
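    A rough PyTorch sketch of the iterative key-frame fusion loop in the abstract above: the features of each key frame are blended with the fusion features carried over from the previous key frame before detection runs on the result. The backbone, the fixed blending weight and the toy detection head are assumptions.

      import torch
      import torch.nn as nn

      backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
      detector = nn.Conv2d(16, 5, 1)            # toy head: objectness + 4 box values per location

      @torch.no_grad()
      def detect_key_frames(key_frames, mix=0.5):
          fused, results = None, []
          for frame in key_frames:              # key frames in detection order
              feat = backbone(frame)            # features of the current key frame
              # Fuse with the fusion features of the preceding key frame (iterative fusion).
              fused = feat if fused is None else mix * fused + (1 - mix) * feat
              results.append(detector(fused))   # detection on the fused features
          return results

      key_frames = [torch.randn(1, 3, 64, 64) for _ in range(4)]
      outputs = detect_key_frames(key_frames)
      print(len(outputs), outputs[0].shape)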
  • Publication number: 20200311476
    Abstract: Provided are a target object processing method and apparatus, and a storage medium. The method includes: inputting first data into a first processing module to obtain a predicted data annotation result; inputting the data annotation result into a second processing module, and performing scene-adaptive incremental learning according to the data annotation result to obtain a neural network adapted to a scene of second data; and processing a scene corresponding to a target object according to data including the target object and the neural network.
    Type: Application
    Filed: June 15, 2020
    Publication date: October 1, 2020
    Inventors: Shixin HAN, Yu GUO, Hongwei QIN, Yu ZHAO
  • Publication number: 20200258242
    Abstract: A target tracking method includes: obtaining feature data of a reference frame of a first image frame, wherein the first image frame and at least one second image frame have the same reference frame; and determining the position of a tracking target in the first image frame based on the feature data of the reference frame. Based on the embodiments in the present disclosure, feature data of a reference frame of a first image frame is acquired, and the position of a tracking target in the first image frame is determined based on the feature data of the reference frame.
    Type: Application
    Filed: March 16, 2020
    Publication date: August 13, 2020
    Inventors: Shaohui LIU, Hongwei QIN
  • Publication number: 20200219268
    Abstract: Target tracking methods and apparatuses, electronic devices, and storage media are provided. The method includes: obtaining features of a plurality of reference images of a target image; determining a plurality of initial predicted positions of a tracking target in the target image based on the features of the plurality of reference images; and determining a final position of the tracking target in the target image based on the plurality of initial predicted positions.
    Type: Application
    Filed: March 18, 2020
    Publication date: July 9, 2020
    Inventors: Shaohui LIU, Hongwei QIN
  • Publication number: 20200184059
    Abstract: A face unlocking method includes: performing face detection on one or more images; performing face feature extraction on an image in which a face is detected; performing authentication on extracted face features based on stored face features, wherein the stored face features at least comprise face features of face images of at least two different angles corresponding to a same identity (ID); and performing an unlocking operation at least in response to the extracted face features passing the authentication.
    Type: Application
    Filed: February 13, 2020
    Publication date: June 11, 2020
    Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Liwei WU, Xiao JIN, Hongwei QIN, Rui ZHANG, Tianpeng BAO, Guanglu SONG, Xin SU, Junjie YAN
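    A simplified NumPy sketch of the authentication step in the abstract above: the face features extracted from an image are compared against stored features that cover at least two different angles for the same ID, and the unlocking operation proceeds only if the match passes. Face detection and feature extraction are left out; the cosine similarity, the threshold and the enrolled data are assumptions.

      import numpy as np

      def cosine(a, b):
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

      def authenticate(extracted_feature, stored_features_by_id, threshold=0.6):
          """Pass if the extracted feature matches any stored multi-angle feature of an enrolled ID."""
          for user_id, angle_features in stored_features_by_id.items():
              if any(cosine(extracted_feature, f) >= threshold for f in angle_features):
                  return user_id               # authentication passed -> perform the unlocking operation
          return None                          # no match -> keep the device locked

      rng = np.random.default_rng(0)
      frontal, profile = rng.normal(size=128), rng.normal(size=128)
      stored = {"user_0001": [frontal, profile]}          # face features of at least two angles per ID
      probe = frontal + 0.1 * rng.normal(size=128)        # feature extracted from a newly captured face
      print("unlock for:", authenticate(probe, stored))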
  • Publication number: 20200104642
    Abstract: An image processing method includes: inputting a to-be-processed image into a neural network; and forming discrete feature data of the to-be-processed image via the neural network, where the neural network is trained based on guidance information, and during the training process, the neural network is taken as a student neural network; the guidance information includes: a difference between discrete feature data formed by a teacher neural network for an image sample and discrete feature data formed by the student neural network for the image sample.
    Type: Application
    Filed: December 2, 2019
    Publication date: April 2, 2020
    Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Yi WEI, Hongwei QIN
  • Publication number: 20190266409
    Abstract: A video recognition and training method, an apparatus, an electronic device and a storage medium are provided. The method includes: extracting features of a first key frame in a video; performing fusion on the features of the first key frame and fusion features of a second key frame in the video to obtain fusion features of the first key frame, where a detection sequence of the second key frame in the video precedes that of the first key frame; and performing detection on the first key frame according to the fusion features of the first key frame to obtain an object detection result of the first key frame. Through iterative multi-frame feature fusion, the information contained in the shared features of the key frames in the video can be enhanced, thereby improving frame recognition accuracy and video recognition efficiency.
    Type: Application
    Filed: May 14, 2019
    Publication date: August 29, 2019
    Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Tangcongrui HE, Hongwei QIN