Patents by Inventor Chen Change LOY

Chen Change LOY has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210295473
    Abstract: A method for image restoration, an electronic device and a computer storage medium are provided. The method includes: performing region division on an acquired image to obtain more than one sub-image; inputting each sub-image into a multi-path neural network and restoring it using a restoration network determined for that sub-image; and outputting a restored image for each sub-image, so as to obtain a restored image of the acquired image.
    Type: Application
    Filed: June 8, 2021
    Publication date: September 23, 2021
    Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Ke YU, Xintao WANG, Chao DONG, Xiaoou TANG, Chen Change LOY
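The divide-restore-reassemble flow described in the abstract can be sketched in a few lines. This is a minimal illustration, not the patented method: the per-region routing here is a toy variance threshold standing in for the learned path selection, and the names `restore_by_region` and `restorers` are hypothetical.

```python
import numpy as np

def restore_by_region(image, tile=32, restorers=None):
    """Split `image` into tile x tile sub-images, restore each with a
    restoration path chosen per region, and reassemble the result.

    `restorers` is a list of callables; the selector below (a variance
    threshold) is a placeholder for the learned routing in the patent.
    """
    if restorers is None:
        restorers = [lambda x: x,                       # identity path
                     lambda x: np.clip(x * 1.1, 0, 1)]  # toy "enhance" path
    h, w = image.shape[:2]
    out = image.copy()
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            sub = image[y:y + tile, x:x + tile]
            idx = int(sub.var() > 0.01)  # pick a restoration path per sub-image
            out[y:y + tile, x:x + tile] = restorers[idx](sub)
    return out
```

The key design point is that each sub-image can take a different restoration path, so easy regions need not pay the cost of a heavy network.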
  • Publication number: 20210279892
    Abstract: An image processing method and device, and a network training method and device, are provided. The image processing method includes: determining a guide group arranged on an image to be processed and directed at a target object, the guide group comprising at least one guide point, each guide point indicating the position of a sampling pixel and the magnitude and direction of that pixel's motion; and performing optical flow prediction based on the guide points in the guide group and the image to be processed, to obtain the motion of the target object in the image to be processed.
    Type: Application
    Filed: May 25, 2021
    Publication date: September 9, 2021
    Inventors: Xiaohang ZHAN, Xingang PAN, Ziwei LIU, Dahua LIN, Chen Change LOY
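The idea of spreading sparse guide-point motions into a dense motion field can be illustrated with a crude nearest-guide assignment. This is only a sketch of the input/output contract, not the learned optical-flow prediction in the patent; `dense_flow_from_guides` is an invented name.

```python
import numpy as np

def dense_flow_from_guides(shape, guide_points):
    """Spread sparse guide-point motions into a dense flow field by
    nearest-guide assignment; a stand-in for the learned prediction.

    guide_points: list of ((y, x), (vy, vx)) pairs, each giving a sampling
    pixel position and the magnitude/direction of its motion.
    """
    h, w = shape
    flow = np.zeros((h, w, 2))
    ys, xs = np.mgrid[0:h, 0:w]
    dists = np.stack([(ys - y) ** 2 + (xs - x) ** 2
                      for (y, x), _ in guide_points])
    nearest = np.argmin(dists, axis=0)  # index of the closest guide point
    for i, (_, v) in enumerate(guide_points):
        flow[nearest == i] = v          # copy that guide's motion vector
    return flow
```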
  • Publication number: 20210241470
    Abstract: An image processing method includes: acquiring an image frame sequence that includes a to-be-processed image frame and one or more image frames adjacent to it, and performing image alignment between the to-be-processed image frame and each image frame in the sequence to obtain multiple pieces of aligned feature data; determining, based on the multiple pieces of aligned feature data, multiple similarity features, each between a respective piece of aligned feature data and the aligned feature data corresponding to the to-be-processed image frame, and determining weight information for each piece of aligned feature data based on the multiple similarity features; and fusing the multiple pieces of aligned feature data according to the weight information to obtain fusion information of the image frame sequence, the fusion information being used to obtain a processed image frame corresponding to the to-be-processed image frame.
    Type: Application
    Filed: April 21, 2021
    Publication date: August 5, 2021
    Inventors: Xiaoou TANG, Xintao WANG, Zhuojie CHEN, Ke YU, Chao DONG, Chen Change LOY
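The similarity-weighted fusion step reads naturally as: measure how similar each aligned frame's features are to the reference frame's features, turn those similarities into weights, and take a weighted sum. A minimal NumPy sketch, assuming a dot-product similarity and a sigmoid weighting (the patent does not fix these choices):

```python
import numpy as np

def fuse_aligned_features(aligned, ref_index):
    """aligned: (T, C, H, W) aligned feature data for T frames;
    ref_index: index of the to-be-processed (reference) frame.

    Computes a per-frame, per-pixel similarity to the reference feature
    and uses it to weight and fuse the aligned features.
    """
    ref = aligned[ref_index]                          # (C, H, W) reference
    sim = (aligned * ref[None]).sum(axis=1)           # (T, H, W) dot-product similarity
    weights = 1.0 / (1.0 + np.exp(-sim))              # sigmoid -> fusion weights
    fused = (aligned * weights[:, None]).sum(axis=0)  # (C, H, W) weighted fusion
    return fused
```

Frames whose aligned features disagree with the reference (e.g. due to occlusion or misalignment) receive lower weights and contribute less to the fused result.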
  • Publication number: 20210224607
    Abstract: Provided are a method and apparatus for neural network training and a method and apparatus for image generation. The method includes: inputting a first random vector to a generator to obtain a first generated image; inputting the first generated image and a first real image to a discriminator to obtain a first discriminated distribution and a second discriminated distribution; determining a first network loss of the discriminator based on the first discriminated distribution, the second discriminated distribution, a first target distribution and a second target distribution; determining a second network loss of the generator based on the first discriminated distribution and the second discriminated distribution; and performing adversarial training on the generator and the discriminator based on the first network loss and the second network loss.
    Type: Application
    Filed: April 2, 2021
    Publication date: July 22, 2021
    Inventors: Yubin DENG, Bo DAI, Yuanbo XIANGLI, Dahua LIN, Chen Change LOY
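The loss structure in the abstract, with discriminated distributions matched against target distributions, can be sketched with cross-entropy terms. This is an illustrative formulation, not the patented loss: the function names and the choice of cross-entropy are assumptions.

```python
import numpy as np

def cross_entropy(p, q, eps=1e-8):
    """Cross-entropy between a target distribution p and a predicted q."""
    return -(p * np.log(q + eps)).sum()

def discriminator_loss(d_fake, d_real, target_fake, target_real):
    # First network loss: match the discriminated distributions for the
    # generated and real images to their respective target distributions.
    return cross_entropy(target_fake, d_fake) + cross_entropy(target_real, d_real)

def generator_loss(d_fake, target_real):
    # Second network loss: push the discriminated distribution of the
    # generated image toward the "real" target, so the generator improves
    # by fooling the discriminator.
    return cross_entropy(target_real, d_fake)
```

Alternating minimization of the two losses is the usual adversarial training loop: update the discriminator on the first loss, then the generator on the second.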
  • Publication number: 20210104015
    Abstract: An example of the present disclosure provides methods, apparatuses and devices for magnifying a feature map, and a computer readable storage medium. The method includes: receiving a source feature map to be magnified; obtaining N reassembly kernels corresponding to each source position in the source feature map by performing convolution on the source feature map, wherein N refers to a square of a magnification factor of the source feature map; obtaining, for each of the reassembly kernels, a normalized reassembly kernel by performing normalization; obtaining, for each source position in the source feature map, N reassembly features corresponding to the source position by reassembling features of a reassembly region determined according to the source position with N normalized reassembly kernels corresponding to the source position; and generating a target feature map according to the N reassembly features corresponding to each source position in the source feature map.
    Type: Application
    Filed: December 15, 2020
    Publication date: April 8, 2021
    Inventors: Jiaqi WANG, Kai CHEN, Rui XU, Ziwei LIU, Chen Change LOY, Dahua LIN
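The reassembly procedure in this abstract can be followed step by step: for each source position, normalize its N = σ² predicted kernels with a softmax, then form each of the N output features as a kernel-weighted combination of the k×k reassembly region around that position. A slow but readable NumPy sketch, assuming the reassembly kernels have already been produced by the convolution step (the name `carafe_upsample` and the shapes are illustrative):

```python
import numpy as np

def carafe_upsample(feat, kernels, sigma=2, k=3):
    """feat: (C, H, W) source feature map.
    kernels: (H, W, sigma*sigma, k*k) raw reassembly kernels, one set of
    sigma^2 kernels per source position.
    Returns a (C, sigma*H, sigma*W) magnified target feature map.
    """
    C, H, W = feat.shape
    pad = k // 2
    padded = np.pad(feat, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    out = np.zeros((C, sigma * H, sigma * W))
    for y in range(H):
        for x in range(W):
            # k x k reassembly region centred on the source position
            region = padded[:, y:y + k, x:x + k].reshape(C, -1)  # (C, k*k)
            for n in range(sigma * sigma):
                w = kernels[y, x, n]
                w = np.exp(w - w.max()); w /= w.sum()  # softmax normalization
                dy, dx = divmod(n, sigma)
                out[:, sigma * y + dy, sigma * x + dx] = region @ w
    return out
```

Because each kernel is softmax-normalized, every target feature is a convex combination of the source region, so constant inputs are reproduced exactly.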
  • Publication number: 20200134375
    Abstract: A semantic segmentation model training method includes: performing, by a semantic segmentation model, image semantic segmentation on at least one unlabeled image to obtain a preliminary semantic segmentation result as the category of the unlabeled image; obtaining, by a convolutional neural network, based on the category of the at least one unlabeled image and the category of at least one labeled image, sub-images respectively corresponding to at least two images and features corresponding to the sub-images, where the at least two images comprise the at least one unlabeled image and the at least one labeled image, and the sub-images carry the categories of their corresponding images; and training the semantic segmentation model on the basis of the categories of the sub-images and the feature distances between them.
    Type: Application
    Filed: December 25, 2019
    Publication date: April 30, 2020
    Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Xiaohang ZHAN, Ziwei LIU, Ping LUO, Chen Change LOY, Xiaoou TANG
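The first step of this method is classic pseudo-labeling: run the current segmentation model on unlabeled images and treat its predictions as preliminary categories. A minimal sketch, assuming `seg_model` maps an image to per-pixel class scores of shape (H, W, K); the helper name is hypothetical:

```python
import numpy as np

def pseudo_label(seg_model, unlabeled_images):
    """Run the segmentation model on unlabeled images and keep its argmax
    predictions as preliminary per-pixel categories (pseudo-labels)."""
    return [np.argmax(seg_model(img), axis=-1) for img in unlabeled_images]
```

The pseudo-labeled sub-images then enter training alongside truly labeled ones, with feature distances between sub-images supplying the additional supervision described in the abstract.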
  • Publication number: 20190043205
    Abstract: The application relates to a method and system for tracking a target object in a video. The method includes: extracting, from the video, a 3-dimensional (3D) feature block containing the target object; decomposing the extracted 3D feature block into a 2-dimensional (2D) spatial feature map containing spatial information of the target object and a 2D spatial-temporal feature map containing spatial-temporal information of the target object; estimating, in the 2D spatial feature map, a location of the target object; determining, in the 2D spatial-temporal feature map, a speed and an acceleration of the target object; calibrating the estimated location of the target object according to the determined speed and acceleration; and tracking the target object in the video according to the calibrated location.
    Type: Application
    Filed: October 11, 2018
    Publication date: February 7, 2019
    Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Xiaogang WANG, Jing SHAO, Chen-Change LOY, Kai KANG
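The calibration step, combining a location estimated from appearance with speed and acceleration, can be sketched as blending the estimate with a constant-acceleration kinematic prediction. This is an illustrative formulation; the blending weight `alpha` and all names are assumptions, not the patented rule.

```python
import numpy as np

def calibrate_location(estimated, prev_location, speed, acceleration,
                       dt=1.0, alpha=0.5):
    """Blend the location estimated from the spatial feature map with a
    kinematic prediction derived from the spatial-temporal feature map
    (constant-acceleration motion model)."""
    predicted = prev_location + speed * dt + 0.5 * acceleration * dt ** 2
    return alpha * estimated + (1 - alpha) * predicted
```

When the appearance-based estimate drifts (e.g. under occlusion), the kinematic term keeps the calibrated location consistent with the object's recent motion.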
  • Publication number: 20180144193
    Abstract: A method for identifying social relations of persons in an image, including: generating face regions for the faces of the persons in the image; determining at least one spatial cue for each of the faces; extracting features related to social relation for each face from the face regions; determining a shared facial feature from the extracted features and the determined spatial cues, the shared feature being used by multiple social relation inferences; and predicting the social relations of the persons from the shared facial feature.
    Type: Application
    Filed: December 29, 2017
    Publication date: May 24, 2018
    Inventors: Xiaoou TANG, Zhanpeng ZHANG, Ping LUO, Chen Change LOY
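The shared-feature design, with one feature feeding several relation inferences, can be shown with a linear sketch: concatenate face features and spatial cues into a shared vector, then apply each inference head to it. All names here are illustrative and the heads are plain weight matrices standing in for learned predictors.

```python
import numpy as np

def predict_relations(face_feature, spatial_cue, heads):
    """Form a shared facial feature by concatenating the extracted face
    feature with the spatial cue, then apply each social-relation
    inference head (a weight matrix here) to the shared feature."""
    shared = np.concatenate([face_feature, spatial_cue])  # shared facial feature
    return {name: weights @ shared for name, weights in heads.items()}
```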
  • Publication number: 20170243097
    Abstract: The present invention discloses a system and a method for decoding QR codes in complex scenes, a system and a method for generating multi-layer color QR codes, and applications enabled by these systems and methods. The method for decoding a QR code may include: detecting rough locations of the QR code with a learning-based QR code detector that is pre-trained offline; localizing finder patterns and alignment patterns within each detected location; correcting the geometric distortion of each QR code based on the localized finder patterns and alignment patterns; restoring the color of each data module within each corrected QR code with a learning-based classifier; and decoding the QR code from each restored QR code. The application also discloses a system and a method to determine the optimal setting parameters for creating a multi-layer QR code that fulfills the user's requirements. Some applications enabled by these systems and methods are also disclosed.
    Type: Application
    Filed: February 23, 2016
    Publication date: August 24, 2017
    Inventors: Chen Change LOY, Wing Cheong LAU, Zhibo YANG, Zhiyi CHENG, Chak Man LI
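The five-stage decoding pipeline in the abstract composes naturally as a sequence of functions. The sketch below only fixes the data flow between stages; every callable is a stand-in for the learned or geometric component named in the patent, not an implementation of it.

```python
def decode_qr(image, detector, localize, correct, restore_colors, decode):
    """Compose the decoding pipeline from the abstract: detect rough QR
    locations, localize finder/alignment patterns, correct geometric
    distortion, restore module colors, then decode each code."""
    results = []
    for region in detector(image):          # rough QR code locations
        patterns = localize(region)         # finder + alignment patterns
        rectified = correct(region, patterns)   # undo geometric distortion
        restored = restore_colors(rectified)    # per-module color restoration
        results.append(decode(restored))        # standard QR decoding
    return results
```

Keeping the stages as separate callables mirrors the patent's structure: the detector and color classifier are learned components that can be retrained independently of the geometric correction and final decoding.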