Patents by Inventor Chen Change LOY

Chen Change LOY has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11301726
    Abstract: An anchor determination method includes: performing feature extraction on an image to be processed to obtain a first feature map of the image to be processed; performing anchor prediction on the first feature map via an anchor prediction network to obtain position information of anchors and shape information of the anchors in the first feature map, the position information of the anchors referring to information about positions in the first feature map where the anchors are generated. A corresponding anchor determination apparatus and a storage medium are also provided.
    Type: Grant
    Filed: April 21, 2020
    Date of Patent: April 12, 2022
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Kai Chen, Jiaqi Wang, Shuo Yang, Chen Change Loy, Dahua Lin
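The two-branch anchor prediction described in the abstract can be sketched as follows. This is a toy illustration, not the patented network: the linear "branches", the feature values, the threshold, and the base anchor size of 8 are all invented for the example; the location branch scores each position, and the shape branch regresses a width and height in log space for positions that generate an anchor.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_anchors(feature_map, loc_weights, shape_weights,
                    threshold=0.5, base=8.0):
    """Location branch: a per-position objectness score; positions scoring
    above the threshold generate an anchor.  Shape branch: per-position
    (dw, dh) regressed in log space relative to a base size."""
    anchors = []
    for i, row in enumerate(feature_map):
        for j, feat in enumerate(row):
            score = sigmoid(sum(w * f for w, f in zip(loc_weights, feat)))
            if score <= threshold:
                continue  # no anchor generated at this position
            dw = sum(w * f for w, f in zip(shape_weights[0], feat))
            dh = sum(w * f for w, f in zip(shape_weights[1], feat))
            anchors.append((i, j, base * math.exp(dw), base * math.exp(dh)))
    return anchors

# Toy 2x2 feature map with 3-channel features.
fmap = [[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
        [[0.0, 0.0, 1.0], [1.0, 1.0, 1.0]]]
anchors = predict_anchors(fmap, loc_weights=[2.0, -2.0, 0.0],
                          shape_weights=[[0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])
```

Only the first position scores above the threshold here, so a single anchor is generated, with its shape decoded from the log-space offsets.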
  • Patent number: 11301719
    Abstract: A semantic segmentation model training method includes: performing, by a semantic segmentation model, image semantic segmentation on at least one unlabeled image to obtain a preliminary semantic segmentation result as the category of the unlabeled image; obtaining, by a convolutional neural network based on the category of the at least one unlabeled image and the category of at least one labeled image, sub-images respectively corresponding to the at least two images and features corresponding to the sub-images, where the at least two images comprise the at least one unlabeled image and the at least one labeled image, and the at least two sub-images carry the categories of the corresponding images; and training the semantic segmentation model on the basis of the categories of the at least two sub-images and feature distances between the at least two sub-images.
    Type: Grant
    Filed: December 25, 2019
    Date of Patent: April 12, 2022
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Xiaohang Zhan, Ziwei Liu, Ping Luo, Chen Change Loy, Xiaoou Tang
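One way to read the final training step is as a contrastive objective over sub-image features: patches that carry the same (possibly pseudo-) category are pulled together, patches from different categories are pushed apart. A minimal sketch, assuming a margin-based pairwise loss (the specific loss form is an assumption, not the patented objective):

```python
import math

def patch_contrastive_loss(features, categories, margin=1.0):
    """features: one feature vector per sub-image.
    categories: the (possibly pseudo-) label carried by each sub-image.
    Same-category pairs are penalized by squared distance; different-category
    pairs are penalized only when closer than the margin."""
    loss, pairs = 0.0, 0
    for a in range(len(features)):
        for b in range(a + 1, len(features)):
            d = math.dist(features[a], features[b])
            if categories[a] == categories[b]:
                loss += d * d
            else:
                loss += max(0.0, margin - d) ** 2
            pairs += 1
    return loss / max(pairs, 1)
```

For example, two same-category patches at distance 1 yield a loss of 1.0, while two different-category patches farther apart than the margin contribute nothing.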
  • Patent number: 11222211
    Abstract: A method and an apparatus for segmenting a video object, an electronic device, a storage medium, and a program include: performing, among at least some frames of a video, inter-frame transfer of an object segmentation result of a reference frame in sequence from the reference frame, to obtain an object segmentation result of at least one other frame among the at least some frames; determining other frames having lost objects with respect to the object segmentation result of the reference frame among the at least some frames; using the determined other frames as target frames to segment the lost objects, so as to update the object segmentation results of the target frames; and transferring the updated object segmentation results of the target frames to the at least one other frame in the video in sequence. The accuracy of video object segmentation results can therefore be improved.
    Type: Grant
    Filed: December 29, 2018
    Date of Patent: January 11, 2022
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
    Inventors: Xiaoxiao Li, Yuankai Qi, Zhe Wang, Kai Chen, Ziwei Liu, Jianping Shi, Ping Luo, Chen Change Loy, Xiaoou Tang
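Representing each frame's segmentation result as the set of object ids it contains, the transfer-then-repair loop above can be sketched as below. The lossy transfer has already happened; the oracle `resegment` callback stands in for the patented re-segmentation network:

```python
def repair_segmentation(frames, reference_ids, resegment):
    """frames: per-frame sets of segmented object ids, already obtained by
    inter-frame transfer from the reference frame.  Frames missing objects
    relative to the reference become target frames: the lost objects are
    re-segmented there, and the updated result is transferred onward."""
    for t, ids in enumerate(frames):
        lost = reference_ids - ids
        if lost:
            frames[t] = ids | resegment(t, lost)   # update the target frame
            for u in range(t + 1, len(frames)):    # transfer onward in sequence
                frames[u] = frames[u] | frames[t]
    return frames

# Toy run: object 2 is lost from frame 2 onward and gets recovered.
frames = [{1, 2}, {1, 2}, {1}, {1}]
out = repair_segmentation(frames, {1, 2}, resegment=lambda t, lost: lost)
```

After the repair pass, every frame again carries both objects.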
  • Patent number: 11144800
    Abstract: An image disambiguation method includes: performing image feature extraction and semantic recognition on at least two images in an image set including similar targets to obtain N K-dimensional semantic feature probability vectors, where the image set includes N images, N and K are both positive integers, and N is greater than or equal to 2; determining a differential feature combination according to the N K-dimensional semantic feature probability vectors, the differential feature combination indicating a difference between the similar targets in the at least two images in the image set; and generating a natural language for representing or prompting the difference between the similar targets in the at least two images in the image set according to the differential feature combination and image features of the at least two images in the image set.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: October 12, 2021
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Xiaoou Tang, Yining Li, Chen Huang, Chen Change Loy
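The "differential feature combination" can be illustrated by ranking the K semantic dimensions by how much their probabilities spread across the N images; the top dimensions are the attributes worth verbalizing. A toy sketch, where the max-minus-min spread score is an assumption:

```python
def differential_features(prob_vectors, top_k=1):
    """prob_vectors: N vectors of K semantic-feature probabilities.
    Returns the indices of the top_k dimensions that differ most across
    the N images, i.e. the most discriminative attributes."""
    K = len(prob_vectors[0])
    spread = [(max(v[k] for v in prob_vectors) -
               min(v[k] for v in prob_vectors), k) for k in range(K)]
    spread.sort(key=lambda s: (-s[0], s[1]))
    return [k for _, k in spread[:top_k]]

# Two images of similar targets: they agree on attributes 0 and 2,
# but disagree strongly on attribute 1.
picked = differential_features([[0.5, 0.9, 0.2], [0.5, 0.1, 0.3]])
```

The selected dimension is what a language-generation stage would then describe ("the left one is wearing a hat", say).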
  • Publication number: 20210295473
    Abstract: A method for image restoration, an electronic device and a computer storage medium are provided. The method includes: performing region division on an acquired image to obtain more than one sub-image; inputting each sub-image into a multi-path neural network and restoring it using a restoration network determined for that sub-image; and outputting the restored image of each sub-image, so as to obtain a restored image of the acquired image.
    Type: Application
    Filed: June 8, 2021
    Publication date: September 23, 2021
    Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Ke YU, Xintao WANG, Chao DONG, Xiaoou TANG, Chen Change LOY
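The divide-route-restore idea can be sketched as tiling the image, letting a router pick one of several restoration paths per tile, and stitching the restored tiles back together. The router rule and the toy "networks" below are invented for the example; in the patent both are learned:

```python
def restore_by_regions(image, tile, paths, route):
    """Split image into tile x tile sub-images, restore each with the
    path chosen by route(sub_image), and reassemble the result."""
    H, W = len(image), len(image[0])
    out = [[0.0] * W for _ in range(H)]
    for i in range(0, H, tile):
        for j in range(0, W, tile):
            sub = [row[j:j + tile] for row in image[i:i + tile]]
            restored = paths[route(sub)](sub)
            for a, row in enumerate(restored):
                for b, v in enumerate(row):
                    out[i + a][j + b] = v
    return out

# Toy paths: an identity path and a smoothing (mean) path.
identity = lambda sub: sub
mean_fill = lambda sub: [[sum(map(sum, sub)) / (len(sub) * len(sub[0]))] *
                         len(sub[0]) for _ in sub]
# Toy router: smooth "noisy" tiles (here: tiles containing a value > 1).
route = lambda sub: 1 if any(v > 1 for row in sub for v in row) else 0

img = [[0.0, 0.0, 4.0, 0.0],
       [0.0, 0.0, 0.0, 0.0],
       [1.0, 1.0, 1.0, 1.0],
       [1.0, 1.0, 1.0, 1.0]]
out = restore_by_regions(img, 2, [identity, mean_fill], route)
```

Only the tile containing the outlier value is routed through the smoothing path; the others pass through unchanged.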
  • Publication number: 20210279892
    Abstract: An image processing method and a device, and a network training method and a device are provided. The image processing method includes determining a guide group arranged on an image to be processed and directed at a target object, the guide group comprising at least one guide point, and the guide point being used to indicate the position of a sampling pixel, and the magnitude and direction of the motion speed of the sampling pixel; and on the basis of the guide point in the guide group and the image to be processed, performing optical flow prediction to obtain the motion of the target object in the image to be processed.
    Type: Application
    Filed: May 25, 2021
    Publication date: September 9, 2021
    Inventors: Xiaohang ZHAN, Xingang PAN, Ziwei LIU, Dahua LIN, Chen Change LOY
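A guide group can be encoded as sparse motion maps for the optical-flow predictor to consume: one channel per velocity component plus a mask marking which pixels carry guidance. A minimal sketch of that encoding (the map layout is an assumption, not necessarily the patented representation):

```python
def rasterize_guides(guides, height, width):
    """guides: list of (x, y, vx, vy) guide points, each giving a sampling
    pixel position and the magnitude/direction of its motion speed as a
    velocity vector.  Returns (vx_map, vy_map, mask) of size height x width."""
    vx_map = [[0.0] * width for _ in range(height)]
    vy_map = [[0.0] * width for _ in range(height)]
    mask = [[0] * width for _ in range(height)]
    for x, y, vx, vy in guides:
        vx_map[y][x], vy_map[y][x], mask[y][x] = vx, vy, 1
    return vx_map, vy_map, mask

# One guide point at pixel (x=1, y=0) moving right and up.
vx_map, vy_map, mask = rasterize_guides([(1, 0, 2.0, -1.0)], 2, 3)
```

The prediction network would then densify these sparse maps into full-image motion for the target object.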
  • Publication number: 20210241470
    Abstract: An image processing method includes: acquiring an image frame sequence, including a to-be-processed image frame and one or more image frames adjacent thereto, and performing image alignment on the to-be-processed image frame and each of image frames in the image frame sequence to obtain multiple pieces of aligned feature data; determining, based on the multiple pieces of aligned feature data, multiple similarity features each between a respective one of the multiple pieces of aligned feature data and the aligned feature data corresponding to the to-be-processed image frame, and determining weight information of each of the multiple pieces of aligned feature data based on the multiple similarity features; and fusing the multiple pieces of aligned feature data according to the weight information to obtain fusion information of the image frame sequence, the fusion information being used to obtain a processed image frame corresponding to the to-be-processed image frame.
    Type: Application
    Filed: April 21, 2021
    Publication date: August 5, 2021
    Inventors: Xiaoou TANG, Xintao WANG, Zhuojie CHEN, Ke YU, Chao DONG, Chen Change LOY
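The similarity-weighted fusion step can be sketched as: score each aligned feature by its dot product with the reference (to-be-processed) frame's feature, softmax the scores into weights, and take the weighted sum. This is a simplification of the attention used in practice, with whole-frame feature vectors standing in for spatial feature maps:

```python
import math

def fuse_aligned_features(aligned, ref_index):
    """aligned: one feature vector per frame, already aligned to the
    to-be-processed frame.  Similarity to the reference frame's feature
    determines each frame's fusion weight."""
    ref = aligned[ref_index]
    sims = [sum(a * r for a, r in zip(feat, ref)) for feat in aligned]
    exps = [math.exp(s) for s in sims]
    total = sum(exps)
    weights = [e / total for e in exps]
    fused = [sum(w * feat[d] for w, feat in zip(weights, aligned))
             for d in range(len(ref))]
    return fused, weights

# Frames 0 and 1 resemble the reference (frame 0); frame 2 does not.
fused, weights = fuse_aligned_features(
    [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]], ref_index=0)
```

Dissimilar frames get down-weighted, which is what suppresses occlusions and misalignments before reconstruction.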
  • Publication number: 20210224607
    Abstract: Provided are a method and apparatus for neural network training and a method and apparatus for image generation. The method includes that: a first random vector is input to a generator to obtain a first generated image; the first generated image and a first real image are input to a discriminator to obtain a first discriminated distribution and a second discriminated distribution; a first network loss of the discriminator is determined based on the first discriminated distribution, the second discriminated distribution, a first target distribution and a second target distribution; a second network loss of the generator is determined based on the first discriminated distribution and the second discriminated distribution; and adversarial training is performed on the generator and the discriminator based on the first network loss and the second network loss.
    Type: Application
    Filed: April 2, 2021
    Publication date: July 22, 2021
    Inventors: Yubin DENG, Bo DAI, Yuanbo XIANGLI, Dahua LIN, Chen Change LOY
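Since the discriminator here outputs distributions rather than scalar scores, its losses can be read as divergences between discriminated and target distributions. A sketch using KL divergence (the choice of KL, and the targets passed in the example, are assumptions for illustration):

```python
import math

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions p and q."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def discriminator_loss(d_fake, d_real, target_fake, target_real):
    # push both discriminated distributions toward their respective targets
    return kl(target_fake, d_fake) + kl(target_real, d_real)

def generator_loss(d_fake, target_real):
    # push the generated sample's discriminated distribution toward "real"
    return kl(target_real, d_fake)
```

Adversarial training alternates gradient steps on these two losses; when the discriminator's output distribution matches its target exactly, its loss term vanishes.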
  • Patent number: 11049217
    Abstract: An example of the present disclosure provides methods, apparatuses and devices for magnifying a feature map, and a computer readable storage medium. The method includes: receiving a source feature map to be magnified; obtaining N reassembly kernels corresponding to each source position in the source feature map by performing convolution on the source feature map, wherein N refers to a square of a magnification factor of the source feature map; obtaining, for each of the reassembly kernels, a normalized reassembly kernel by performing normalization; obtaining, for each source position in the source feature map, N reassembly features corresponding to the source position by reassembling features of a reassembly region determined according to the source position with N normalized reassembly kernels corresponding to the source position; and generating a target feature map according to the N reassembly features corresponding to each source position in the source feature map.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: June 29, 2021
    Assignee: Beijing Sensetime Technology Development Co., Ltd.
    Inventors: Jiaqi Wang, Kai Chen, Rui Xu, Ziwei Liu, Chen Change Loy, Dahua Lin
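The reassembly procedure can be sketched end to end for a single-channel map: each source position carries N = scale² predicted kernels; each kernel is softmax-normalized, reassembles a k×k neighbourhood of the source, and writes one of the N target pixels that the source position expands into. A toy sketch with plain Python lists (the real method predicts the kernels by convolution; here they are passed in):

```python
import math

def reassemble_upsample(src, kernels, scale=2, k=3):
    """src: H x W floats.  kernels[i][j] is a list of N = scale**2 kernels,
    each a flat list of k*k pre-normalization weights."""
    H, W = len(src), len(src[0])
    out = [[0.0] * (W * scale) for _ in range(H * scale)]
    r = k // 2
    for i in range(H):
        for j in range(W):
            for n, kern in enumerate(kernels[i][j]):
                # normalize the reassembly kernel with softmax
                exps = [math.exp(w) for w in kern]
                s = sum(exps)
                norm = [e / s for e in exps]
                # reassemble the k x k region centered on (i, j), with
                # edge clamping for out-of-bounds neighbours
                val = 0.0
                for a in range(k):
                    for b in range(k):
                        y = min(max(i + a - r, 0), H - 1)
                        x = min(max(j + b - r, 0), W - 1)
                        val += norm[a * k + b] * src[y][x]
                di, dj = divmod(n, scale)
                out[i * scale + di][j * scale + dj] = val
    return out

# Constant 2x2 map, all-zero kernel logits (softmax -> uniform weights):
# the 4x4 output must stay constant, since each kernel sums to 1.
src = [[1.0, 1.0], [1.0, 1.0]]
kernels = [[[[0.0] * 9 for _ in range(4)] for _ in range(2)] for _ in range(2)]
out = reassemble_upsample(src, kernels)
```

The normalization is what makes the operation a weighted average: a constant input always maps to a constant output, regardless of the kernel logits' scale.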
  • Publication number: 20210104015
    Abstract: An example of the present disclosure provides methods, apparatuses and devices for magnifying a feature map, and a computer readable storage medium. The method includes: receiving a source feature map to be magnified; obtaining N reassembly kernels corresponding to each source position in the source feature map by performing convolution on the source feature map, wherein N refers to a square of a magnification factor of the source feature map; obtaining, for each of the reassembly kernels, a normalized reassembly kernel by performing normalization; obtaining, for each source position in the source feature map, N reassembly features corresponding to the source position by reassembling features of a reassembly region determined according to the source position with N normalized reassembly kernels corresponding to the source position; and generating a target feature map according to the N reassembly features corresponding to each source position in the source feature map.
    Type: Application
    Filed: December 15, 2020
    Publication date: April 8, 2021
    Inventors: Jiaqi WANG, Kai CHEN, Rui XU, Ziwei LIU, Chen Change LOY, Dahua LIN
  • Publication number: 20210097715
    Abstract: An image generation method and device, and a storage medium are provided. The method includes that: an image to be processed, first pose information corresponding to an initial pose of a first object in the image to be processed and second pose information corresponding to a target pose to be generated are acquired; pose switching information is obtained according to the first pose information and second pose information, the pose switching information including an optical flow map between the initial pose and the target pose and/or a visibility map of the target pose; and a first image is generated according to the image to be processed, the second pose information and the pose switching information.
    Type: Application
    Filed: December 10, 2020
    Publication date: April 1, 2021
    Inventors: Yining LI, Chen Huang, Chen Change Loy
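The optical flow map and visibility map can be combined in a simple backward-warping sketch: target-pose pixels marked visible are sampled from the source image via the flow, while invisible pixels fall back to a fill value. The fill strategy stands in for the generation network, which is what actually hallucinates occluded regions:

```python
def warp_with_visibility(src, flow, visible, fill=0.0):
    """src: H x W image; flow[i][j] = (dy, dx) pointing back into src;
    visible[i][j] = 1 where the target-pose pixel is visible in src."""
    H, W = len(src), len(src[0])
    out = [[fill] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            if not visible[i][j]:
                continue  # left to the generator in the real system
            dy, dx = flow[i][j]
            y = min(max(i + dy, 0), H - 1)
            x = min(max(j + dx, 0), W - 1)
            out[i][j] = src[y][x]
    return out

src = [[1.0, 2.0], [3.0, 4.0]]
flow = [[(0, 1), (0, 0)], [(0, 0), (-1, -1)]]
vis = [[1, 1], [0, 1]]
out = warp_with_visibility(src, flow, vis)
```

Pixel (1, 0) is marked invisible, so it keeps the fill value instead of a warped sample.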
  • Patent number: 10853916
    Abstract: A method and system for processing an image operates by: filtering a first real image to obtain a first feature map with improved image features; upscaling the first feature map to improve its resolution, the feature map with improved resolution forming a second feature map; and constructing, from the second feature map, a second real image having enhanced quality and a higher resolution than that of the first real image.
    Type: Grant
    Filed: June 20, 2018
    Date of Patent: December 1, 2020
    Assignee: SENSETIME GROUP LIMITED
    Inventors: Xiaoou Tang, Chao Dong, Tak Wai Hui, Chen Change Loy
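The filter → upscale → reconstruct pipeline can be sketched with placeholder stages: a 3×3 mean filter for feature extraction, nearest-neighbour upscaling, and a trivial reconstruction. In the real system each stage is a learned convolutional layer; the stand-ins below only illustrate the data flow:

```python
def filter_image(img):
    """Feature-extraction stage: a 3x3 mean filter as a stand-in."""
    H, W = len(img), len(img[0])
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            vals = [img[y][x]
                    for y in range(max(0, i - 1), min(H, i + 2))
                    for x in range(max(0, j - 1), min(W, j + 2))]
            out[i][j] = sum(vals) / len(vals)
    return out

def upscale(fmap, scale=2):
    """Upscaling stage: nearest-neighbour as a stand-in for learned upsampling."""
    return [[fmap[i // scale][j // scale]
             for j in range(len(fmap[0]) * scale)]
            for i in range(len(fmap) * scale)]

def super_resolve(img, scale=2):
    # reconstruction stage kept trivial in this sketch
    return upscale(filter_image(img), scale)

sr = super_resolve([[1.0, 1.0], [1.0, 1.0]])
```

A 2×2 input yields a 4×4 output, with a constant image preserved exactly by both placeholder stages.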
  • Patent number: 10825187
    Abstract: The application relates to a method and system for tracking a target object in a video. The method includes: extracting, from the video, a 3-dimension (3D) feature block containing the target object; decomposing the extracted 3D feature block into a 2-dimension (2D) spatial feature map containing spatial information of the target object and a 2D spatial-temporal feature map containing spatial-temporal information of the target object; estimating, in the 2D spatial feature map, a location of the target object; determining, in the 2D spatial-temporal feature map, a speed and an acceleration of the target object; calibrating the estimated location of the target object according to the determined speed and acceleration; and tracking the target object in the video according to the calibrated location.
    Type: Grant
    Filed: October 11, 2018
    Date of Patent: November 3, 2020
    Assignee: Beijing SenseTime Technology Development Co., Ltd
    Inventors: Xiaogang Wang, Jing Shao, Chen-Change Loy, Kai Kang
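The calibration step admits a simple kinematic reading: the location estimated from the spatial map is blended with the location extrapolated from the previous position using the speed and acceleration recovered from the spatial-temporal map. The fixed blend weight below is an assumption; the patent leaves the combination to the tracking system:

```python
def calibrate(estimated, prev, velocity, accel, dt=1.0, alpha=0.5):
    """Blend the appearance-based estimate with the motion-extrapolated
    position prev + v*dt + 0.5*a*dt^2, per coordinate."""
    return tuple(alpha * e + (1 - alpha) * (p + v * dt + 0.5 * a * dt * dt)
                 for e, p, v, a in zip(estimated, prev, velocity, accel))

# Estimate agrees with constant-velocity extrapolation: unchanged.
loc1 = calibrate(estimated=(10.0, 0.0), prev=(8.0, 0.0),
                 velocity=(2.0, 0.0), accel=(0.0, 0.0))
# Estimate lags behind an accelerating target: pulled forward.
loc2 = calibrate(estimated=(0.0, 0.0), prev=(0.0, 0.0),
                 velocity=(2.0, 0.0), accel=(2.0, 0.0))
```

When appearance and motion agree the estimate passes through; when they disagree, the motion model corrects drift in the appearance-based localization.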
  • Publication number: 20200250495
    Abstract: An anchor determination method includes: performing feature extraction on an image to be processed to obtain a first feature map of the image to be processed; performing anchor prediction on the first feature map via an anchor prediction network to obtain position information of anchors and shape information of the anchors in the first feature map, the position information of the anchors referring to information about positions in the first feature map where the anchors are generated. A corresponding anchor determination apparatus and a storage medium are also provided.
    Type: Application
    Filed: April 21, 2020
    Publication date: August 6, 2020
    Inventors: Kai CHEN, Jiaqi Wang, Shuo Yang, Chen Change Loy, Dahua Lin
  • Patent number: 10699170
    Abstract: Disclosed is a method for generating a semantic image labeling model, comprising: forming a first CNN and a second CNN, respectively; randomly initializing the first CNN; inputting a raw image and predetermined label ground truth annotations to the first CNN to iteratively update weights thereof so that a category label probability for the image, which is output from the first CNN, approaches the predetermined label ground truth annotations; randomly initializing the second CNN; inputting the category label probability to the second CNN to correct the input category label probability so as to determine classification errors of the category label probabilities; updating the second CNN by back-propagating the classification errors; concatenating the updated first and second CNNs; classifying each pixel in the raw image into one of general object categories; and back-propagating classification errors through the concatenated CNN to update weights thereof until the classification errors are less than a predetermined threshold.
    Type: Grant
    Filed: January 8, 2018
    Date of Patent: June 30, 2020
    Assignee: Beijing SenseTime Technology Development Co., Ltd.
    Inventors: Xiaoou Tang, Ziwei Liu, Xiaoxiao Li, Ping Luo, Chen Change Loy
  • Publication number: 20200134375
    Abstract: A semantic segmentation model training method includes: performing, by a semantic segmentation model, image semantic segmentation on at least one unlabeled image to obtain a preliminary semantic segmentation result as the category of the unlabeled image; obtaining, by a convolutional neural network based on the category of the at least one unlabeled image and the category of at least one labeled image, sub-images respectively corresponding to the at least two images and features corresponding to the sub-images, where the at least two images comprise the at least one unlabeled image and the at least one labeled image, and the at least two sub-images carry the categories of the corresponding images; and training the semantic segmentation model on the basis of the categories of the at least two sub-images and feature distances between the at least two sub-images.
    Type: Application
    Filed: December 25, 2019
    Publication date: April 30, 2020
    Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Xiaohang ZHAN, Ziwei LIU, Ping LUO, Chen Change LOY, Xiaoou TANG
  • Patent number: 10579876
    Abstract: A method for identifying social relation of persons in an image, including: generating face regions for faces of the persons in the image; determining at least one spatial cue for each of the faces; extracting features related to social relation for each face from the face regions; determining a shared facial feature from the extracted features and the determined spatial cue, the determined feature being shared by multiple social relation inferences; and predicting the social relation of the persons from the shared facial feature.
    Type: Grant
    Filed: December 29, 2017
    Date of Patent: March 3, 2020
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
    Inventors: Xiaoou Tang, Zhanpeng Zhang, Ping Luo, Chen Change Loy
  • Publication number: 20200057925
    Abstract: An image disambiguation method includes: performing image feature extraction and semantic recognition on at least two images in an image set including similar targets to obtain N K-dimensional semantic feature probability vectors, where the image set includes N images, N and K are both positive integers, and N is greater than or equal to 2; determining a differential feature combination according to the N K-dimensional semantic feature probability vectors, the differential feature combination indicating a difference between the similar targets in the at least two images in the image set; and generating a natural language for representing or prompting the difference between the similar targets in the at least two images in the image set according to the differential feature combination and image features of the at least two images in the image set.
    Type: Application
    Filed: October 24, 2019
    Publication date: February 20, 2020
    Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Xiaoou TANG, Yining Li, Chen Huang, Chen Change Loy
  • Publication number: 20190138816
    Abstract: A method and an apparatus for segmenting a video object, an electronic device, a storage medium, and a program include: performing, among at least some frames of a video, inter-frame transfer of an object segmentation result of a reference frame in sequence from the reference frame, to obtain an object segmentation result of at least one other frame among the at least some frames; determining other frames having lost objects with respect to the object segmentation result of the reference frame among the at least some frames; using the determined other frames as target frames to segment the lost objects, so as to update the object segmentation results of the target frames; and transferring the updated object segmentation results of the target frames to the at least one other frame in the video in sequence. The accuracy of video object segmentation results can therefore be improved.
    Type: Application
    Filed: December 29, 2018
    Publication date: May 9, 2019
    Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
    Inventors: Xiaoxiao LI, Yuankai Qi, Zhe Wang, Kai Chen, Ziwei Liu, Jianping Shi, Ping Luo, Chen Change Loy, Xiaoou Tang
  • Publication number: 20190043205
    Abstract: The application relates to a method and system for tracking a target object in a video. The method includes: extracting, from the video, a 3-dimension (3D) feature block containing the target object; decomposing the extracted 3D feature block into a 2-dimension (2D) spatial feature map containing spatial information of the target object and a 2D spatial-temporal feature map containing spatial-temporal information of the target object; estimating, in the 2D spatial feature map, a location of the target object; determining, in the 2D spatial-temporal feature map, a speed and an acceleration of the target object; calibrating the estimated location of the target object according to the determined speed and acceleration; and tracking the target object in the video according to the calibrated location.
    Type: Application
    Filed: October 11, 2018
    Publication date: February 7, 2019
    Applicant: Beijing SenseTime Technology Development Co., Ltd
    Inventors: Xiaogang WANG, Jing SHAO, Chen-Change LOY, Kai KANG