Patents by Inventor Dongqing Zou

Dongqing Zou has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220020124
    Abstract: The present disclosure relates to an image processing method, an image processing device and a storage medium. The method includes: acquiring a blurry image exposed over an exposure time and event data sampled during the exposure time, wherein the event data reflects luminance changes of pixel points in the blurry image; determining a global event feature over the exposure time according to the event data; and determining a sharp image corresponding to the blurry image according to the blurry image, the event data, and the global event feature.
    Type: Application
    Filed: September 29, 2021
    Publication date: January 20, 2022
    Inventors: Zhe JIANG, Yu ZHANG, Dongqing ZOU, Sijie REN
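To see why event data can sharpen a blurry exposure, here is a toy single-pixel sketch in the spirit of the event-based double-integral idea: events record log-intensity changes during the exposure, so integrating them lets us undo the temporal averaging. The function name, the contrast constant `c`, and the discretization are invented for this illustration; the patent's learned global event feature is not reproduced here.

```python
import math

def edi_deblur(blurry, events, c=0.2, n_steps=200):
    """Toy single-pixel sketch of event-based double-integral deblurring.

    blurry  : mean intensity observed over an exposure of length 1
    events  : list of (timestamp, polarity) pairs, timestamps in [0, 1],
              polarity +1/-1 (a brightness-change event at that instant)
    returns : estimated sharp intensity at t = 0
    """
    acc = 0.0
    for i in range(n_steps):
        t = i / (n_steps - 1)
        # integrated event polarity from 0 to t ~ log-intensity change
        integ = sum(p for (te, p) in events if te <= t)
        acc += math.exp(c * integ)
    # blurry = sharp(0) * mean(exp(c * E(t)))  =>  invert the average
    return blurry / (acc / n_steps)
```

With no events the pixel was static, so the "deblurred" value equals the blurry one; a brightening event mid-exposure means the start of the exposure was darker than the average.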
  • Publication number: 20200349391
    Abstract: A method for training an image generation network, an electronic device and a storage medium are provided. The method includes: obtaining a sample image, where the sample image includes a first sample image and a second sample image corresponding to the first sample image; processing the first sample image based on an image generation network to obtain a predicted target image; determining a difference loss between the predicted target image and the second sample image; and training the image generation network based on the difference loss to obtain a trained image generation network.
    Type: Application
    Filed: April 24, 2020
    Publication date: November 5, 2020
    Inventors: Yu ZHANG, Dongqing Zou, Sijie Ren, Zhe Jiang, Xiaohao Chen
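The training recipe in this abstract — predict a target image, measure a difference loss against the paired second sample image, and update the generator — can be sketched with a deliberately tiny one-parameter "generator". The linear model, learning rate, and squared-difference loss are all illustrative stand-ins, not the patent's network.

```python
import random

random.seed(0)
xs = [random.random() for _ in range(64)]   # pixels of "first sample images"
ys = [2.0 * x for x in xs]                  # paired "second sample images"

w = 0.0     # one-parameter toy generator: prediction = w * x
lr = 0.5
for _ in range(200):
    # difference loss: mean squared difference between prediction and target;
    # its gradient with respect to w drives the update, as in plain SGD
    grad = sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad
```

After training, `w` recovers the mapping (here 2.0) that turns a first sample image into its paired target.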
  • Patent number: 10769480
    Abstract: An object detection method and a neural network system for object detection are disclosed. The object detection method acquires a current frame of a sequence of frames representing an image sequence, and extracts a feature map of the current frame. The extracted feature map is pooled with information of a pooled feature map of a previous frame to thereby obtain a pooled feature map of the current frame. An object is detected from the pooled feature map of the current frame. A dynamic vision sensor (DVS) may be utilized to provide the sequence of frames. Improved object detection accuracy may be realized, particularly when object movement speed is slow.
    Type: Grant
    Filed: August 27, 2018
    Date of Patent: September 8, 2020
    Assignees: SAMSUNG ELECTRONICS CO., LTD., BEIJING SAMSUNG TELECOM R&D CENTER
    Inventors: Jia Li, Feng Shi, Weiheng Liu, Dongqing Zou, Qiang Wang, Hyunsurk Ryu, Keun Joo Park, Hyunku Lee
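The key step above is pooling the current frame's feature map with the pooled map carried over from the previous frame, so evidence about slow-moving objects accumulates across frames. One plausible reading of that pooling rule — an element-wise max against a decayed copy of the previous pooled map — is sketched below; the patent does not pin down this exact formula.

```python
def pool_with_previous(feat, pooled_prev, alpha=0.5):
    # Element-wise max of the decayed previous pooled map and the current
    # feature map: strong past responses persist but fade over time.
    return [max(alpha * p, f) for p, f in zip(pooled_prev, feat)]

# A response at index 0 in frame 1 is still visible (decayed) in frame 2.
frames = [[0.0, 0.0], [1.0, 0.0], [0.0, 0.0]]
pooled = frames[0]
history = []
for f in frames[1:]:
    pooled = pool_with_previous(f, pooled)
    history.append(list(pooled))
```

This kind of temporal memory is what helps when a slow-moving object produces few DVS events in any single frame.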
  • Patent number: 10672130
    Abstract: Disclosed are a co-segmentation method and apparatus for a three-dimensional model set, including: obtaining a super patch set for the three-dimensional model set, which includes at least two three-dimensional models, each including at least two super patches; obtaining a consistent affinity propagation model according to a first predefined condition and a conventional affinity propagation model, the consistent affinity propagation model being constrained by the first predefined condition, which is position information for at least two super patches that are in the super patch set and belong to a common three-dimensional model set; converting the consistent affinity propagation model into a consistent convergence affinity propagation model; and clustering the super patch set through the consistent convergence affinity propagation model to generate a co-segmentation outcome for the three-dimensional model set.
    Type: Grant
    Filed: February 8, 2018
    Date of Patent: June 2, 2020
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Dongqing Zou, Xiaogang Wang, Zongji Wang, Qinping Zhao
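The clustering engine in this entry is affinity propagation. A minimal sketch of the standard algorithm (damped responsibility/availability message passing, Frey–Dueck style) is below; the patent's consistency constraint tying super patches across models in the set is omitted, so this only shows the baseline it builds on. The 1-D point set is invented for the demonstration.

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=200):
    """Standard affinity propagation on a similarity matrix S whose
    diagonal holds the exemplar preferences. Returns an exemplar index
    per point."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities
    A = np.zeros((n, n))  # availabilities
    for _ in range(iters):
        # responsibility: r(i,k) = s(i,k) - max_{k'!=k}(a(i,k') + s(i,k'))
        AS = A + S
        idx = AS.argmax(axis=1)
        first = AS[np.arange(n), idx].copy()
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # availability: a(i,k) = min(0, r(k,k) + sum max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        Anew = Rp.sum(axis=0)[None, :] - Rp
        diag = Anew.diagonal().copy()
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, diag)
        A = damping * A + (1 - damping) * Anew
    return (A + R).argmax(axis=1)

# Two well-separated 1-D clusters; similarity = negative squared distance.
pts = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
S = -(pts[:, None] - pts[None, :]) ** 2
np.fill_diagonal(S, np.median(S))   # common default for the preference
labels = affinity_propagation(S)
```

Each point ends up assigned to one exemplar per cluster; the patent's "consistent" variant additionally forces corresponding super patches of different models toward common exemplars.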
  • Patent number: 10636164
    Abstract: The disclosure provides an object detection method and apparatus based on a Dynamic Vision Sensor (DVS). The method includes the following operations: acquiring a plurality of image frames by a DVS; and detecting the image frames by a recurrent coherent network to acquire a candidate box for objects to be detected, wherein the recurrent coherent network comprises a frame detection network model and a candidate graph model. Using this new recurrent coherent detection network, a bounding box for an object to be detected is quickly detected from the data acquired by the DVS. Detection speed is greatly improved while detection accuracy is maintained.
    Type: Grant
    Filed: November 15, 2017
    Date of Patent: April 28, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jia Li, Qiang Wang, Feng Shi, Deheng Qian, Dongqing Zou, Hyunsurk Eric Ryu, JinMan Park, Jingtao Xu, Keun Joo Park, Weiheng Liu
  • Patent number: 10582179
    Abstract: A method and apparatus for processing a binocular disparity image are provided. A method of determining a disparity of a binocular disparity image that includes a left eye image and a right eye image includes acquiring features of a plurality of pixels of the binocular disparity image based on an event distribution of the binocular disparity image, calculating a cost matrix of matching respective pixels between the left eye image and the right eye image based on the features, and determining a disparity of each matched pair of pixels based on the cost matrix.
    Type: Grant
    Filed: October 31, 2016
    Date of Patent: March 3, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Dongqing Zou, Ping Guo, Qiang Wang, Baek Hwan Cho, Keun Joo Park
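The core step in this abstract — build a matching cost between left- and right-image pixels from their features, then choose the disparity with the lowest cost — can be shown on a toy 1-D signal. Using the raw intensity as the "feature" and winner-take-all selection are deliberate simplifications of the patent's event-distribution features and full cost matrix.

```python
def best_disparity(left, right, max_disp):
    """For each left-image position, pick the disparity minimizing the
    absolute feature difference (toy 1-D cost-matrix matching)."""
    disp = []
    for x, lf in enumerate(left):
        costs = [(abs(lf - right[x - d]), d)
                 for d in range(max_disp + 1) if x - d >= 0]
        disp.append(min(costs)[1])   # lowest cost wins
    return disp

right = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
left = [0.0, 0.0, 1.0, 2.0, 3.0, 4.0]   # right shifted by a disparity of 2
```

On this synthetic pair, every position that actually sees the shifted signal recovers the true disparity of 2.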
  • Patent number: 10552973
    Abstract: Exemplary embodiments provide an image vision processing method, device and equipment, involving: determining parallax and depth information of event pixel points in a dual-camera frame image acquired by Dynamic Vision Sensors; determining multiple neighboring event pixel points of each non-event pixel point in the dual-camera frame image; determining, according to location information of each neighboring event pixel point of each non-event pixel point, depth information of the non-event pixel point; and performing processing according to the depth information of each pixel point in the dual-camera frame image. Since non-event pixel points are not required to participate in the matching of pixel points, even if the non-event pixel points are difficult to distinguish or are occluded, their depth information can be accurately determined according to the location information of neighboring event pixel points.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: February 4, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Dongqing Zou, Feng Shi, Weiheng Liu, Deheng Qian, Hyunsurk Eric Ryu, Jia Li, Jingtao Xu, Keun Joo Park, Qiang Wang, Changwoo Shin
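One plausible reading of "determining depth of a non-event pixel from the locations of neighboring event pixels" is inverse-distance-weighted interpolation over the k nearest event pixels; the patent does not commit to this exact formula, so the sketch below is illustrative.

```python
def interp_depth(event_points, query, k=3, eps=1e-9):
    """Depth at a non-event pixel as an inverse-distance-weighted average
    of its k nearest event pixels. event_points: list of ((x, y), depth)."""
    qx, qy = query
    nearest = sorted(event_points,
                     key=lambda p: (p[0][0] - qx) ** 2 + (p[0][1] - qy) ** 2)[:k]
    ws = [1.0 / ((((x - qx) ** 2 + (y - qy) ** 2) ** 0.5) + eps)
          for (x, y), _ in nearest]
    return sum(w * d for w, (_, d) in zip(ws, nearest)) / sum(ws)
```

A non-event pixel midway between two event pixels of depth 1.0 and 3.0 receives the average depth 2.0, without ever being matched across cameras itself.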
  • Patent number: 10341634
    Abstract: A method and apparatus for acquiring an image disparity are provided. The method may include acquiring, from dynamic vision sensors, a first image having a first view of an object and a second image having a second view of the object; calculating a cost within a preset disparity range of an event of the first image and a corresponding event of the second image; calculating an intermediate disparity of the event of the first image and an intermediate disparity of the event of the second image based on the cost; determining whether the event of the first image is a matched event based on the intermediate disparity of the event of the first image and the intermediate disparity of the event of the second image; and predicting optimal disparities of all events of the first image based on an intermediate disparity of the matched event of the first image.
    Type: Grant
    Filed: September 27, 2016
    Date of Patent: July 2, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ping Guo, Dongqing Zou, Qiang Wang, Baek Hwan Cho, Keun Joo Park
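The "matched event" test above — check that a left-image event's intermediate disparity agrees with the disparity found at the corresponding right-image event — is essentially a left-right consistency check. It is sketched below with hypothetical dictionaries mapping event position to intermediate disparity; the tolerance of one pixel is an assumption.

```python
def matched_events(disp_left, disp_right, tol=1):
    """Cross-check: a left-image event at x with disparity d is 'matched'
    when the right image assigns a consistent disparity at x - d."""
    matched = []
    for x, d in disp_left.items():
        d_r = disp_right.get(x - d)
        if d_r is not None and abs(d - d_r) <= tol:
            matched.append(x)
    return matched

# Hypothetical intermediate disparities keyed by 1-D event position.
left = {5: 2, 8: 3}
right = {3: 2, 5: 7}
```

Here the event at x=5 survives the cross-check (the right image agrees at x=3), while the event at x=8 is rejected; only matched events would then seed the prediction of disparities for all remaining events.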
  • Patent number: 10235571
    Abstract: The present invention provides a method for video matting via sparse and low-rank representation, which first selects frames that represent video characteristics in the input video as keyframes, then trains a dictionary according to known pixels in the keyframes, next obtains a reconstruction coefficient satisfying low-rank, sparse, and non-negativity constraints according to the dictionary, and sets the non-local relationship matrix between pixels in the input video according to the reconstruction coefficient, meanwhile setting the Laplace matrix between multiple frames; it then obtains a video alpha matte of the input video according to the α values of the known pixels of the input video and the α values of sample points in the dictionary, the non-local relationship matrix, and the Laplace matrix; and finally extracts a foreground object in the input video according to the video alpha matte, thereby improving the quality of the extracted foreground object.
    Type: Grant
    Filed: June 14, 2016
    Date of Patent: March 19, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Dongqing Zou, Guangying Cao, Xiaogang Wang
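The alpha matte in this entry comes from the standard compositing model I = αF + (1 − α)B. When the foreground F and background B are known, α has a closed form, sketched below; the patent's contribution is estimating α when they are unknown, via the learned dictionary and the non-local/Laplace matrices, which this toy does not attempt.

```python
def solve_alpha(I, F, B, eps=1e-12):
    # Compositing model: I = alpha*F + (1 - alpha)*B, solved for alpha
    # and clamped to the valid range [0, 1].
    return min(1.0, max(0.0, (I - B) / (F - B + eps)))

F, B = 0.9, 0.1
I = 0.25 * F + 0.75 * B   # composite a pixel with true alpha = 0.25
```

Recovering α per pixel is exactly what lets the foreground object be pulled out of each frame at the end of the pipeline.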
  • Publication number: 20190065885
    Abstract: An object detection method and a neural network system for object detection are disclosed. The object detection method acquires a current frame of a sequence of frames representing an image sequence, and extracts a feature map of the current frame. The extracted feature map is pooled with information of a pooled feature map of a previous frame to thereby obtain a pooled feature map of the current frame. An object is detected from the pooled feature map of the current frame. A dynamic vision sensor (DVS) may be utilized to provide the sequence of frames. Improved object detection accuracy may be realized, particularly when object movement speed is slow.
    Type: Application
    Filed: August 27, 2018
    Publication date: February 28, 2019
    Applicant: Beijing Samsung Telecom R&D Center
    Inventors: Jia Li, Feng Shi, Weiheng Liu, Dongqing Zou, Qiang Wang, Hyunsurk Ryu, Keun Joo Park, Hyunku Lee
  • Publication number: 20180240242
    Abstract: Disclosed are a co-segmentation method and apparatus for a three-dimensional model set, including: obtaining a super patch set for the three-dimensional model set, which includes at least two three-dimensional models, each including at least two super patches; obtaining a consistent affinity propagation model according to a first predefined condition and a conventional affinity propagation model, the consistent affinity propagation model being constrained by the first predefined condition, which is position information for at least two super patches that are in the super patch set and belong to a common three-dimensional model set; converting the consistent affinity propagation model into a consistent convergence affinity propagation model; and clustering the super patch set through the consistent convergence affinity propagation model to generate a co-segmentation outcome for the three-dimensional model set.
    Type: Application
    Filed: February 8, 2018
    Publication date: August 23, 2018
    Inventors: XIAOWU CHEN, DONGQING ZOU, XIAOGANG WANG, ZONGJI WANG, QINPING ZHAO
  • Patent number: 10049299
    Abstract: The invention discloses a deep-learning-based method and apparatus for three-dimensional (3D) model triangular facet feature learning and classification. The method includes: constructing a deep convolutional neural network (CNN) feature learning model; training the deep CNN feature learning model; extracting a feature from, and constructing a feature vector for, a 3D model triangular facet having no class label, and reconstructing a feature in the constructed feature vector using a bag-of-words algorithm; determining an output feature corresponding to the 3D model triangular facet having no class label according to the trained deep CNN feature learning model and an initial feature corresponding to the 3D model triangular facet having no class label; and performing classification. The method enhances the capability to describe 3D model triangular facets, thereby ensuring the accuracy of 3D model triangular facet feature learning and classification results.
    Type: Grant
    Filed: February 22, 2017
    Date of Patent: August 14, 2018
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Kan Guo, Dongqing Zou, Qinping Zhao
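The "reconstructing a feature ... using a bag-of-words algorithm" step rests on the standard bag-of-words encoding: quantize each facet feature against a codebook and histogram the assignments. A minimal sketch follows; the two-word codebook and the facet feature vectors are invented for the example.

```python
def bow_encode(features, codebook):
    """Nearest-codeword assignment followed by a normalized histogram --
    the standard bag-of-words encoding."""
    hist = [0] * len(codebook)
    for f in features:
        d2 = [sum((fi - ci) ** 2 for fi, ci in zip(f, c)) for c in codebook]
        hist[d2.index(min(d2))] += 1
    total = sum(hist)
    return [h / total for h in hist]

codebook = [[0.0, 0.0], [1.0, 1.0]]           # invented 2-word codebook
feats = [[0.1, 0.0], [0.9, 1.0], [1.0, 0.9]]  # invented facet features
code = bow_encode(feats, codebook)
```

The resulting fixed-length histogram is what makes variable numbers of raw facet features comparable across 3D models.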
  • Patent number: 10049297
    Abstract: The invention provides a data driven method for transferring indoor scene layout and color style, including: preprocessing images in an indoor image data set, which includes manually labeling semantic information and layout information; obtaining indoor layout and color rules on the data set by learning algorithms; performing object-level semantic segmentation on input indoor reference image, or performing object-level and component-level segmentations using color segmentation methods, to extract layout constraints and color constraints of reference images, associating the reference images with indoor 3D scene via the semantic information; constructing a graph model for indoor reference image scene and indoor 3D scene to express indoor scene layout and color; performing similarity measurement on the indoor scene and searching for similar images in the data set to obtain an image sequence with gradient layouts from reference images to input 3D scene; performing image-sequence-guided layout and color transfer g
    Type: Grant
    Filed: March 20, 2017
    Date of Patent: August 14, 2018
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Jianwei Li, Qing Li, Dongqing Zou, Bo Gao, Qinping Zhao
  • Publication number: 20180137647
    Abstract: The disclosure provides an object detection method and apparatus based on a Dynamic Vision Sensor (DVS). The method includes the following operations: acquiring a plurality of image frames by a DVS; and detecting the image frames by a recurrent coherent network to acquire a candidate box for objects to be detected, wherein the recurrent coherent network comprises a frame detection network model and a candidate graph model. Using this new recurrent coherent detection network, a bounding box for an object to be detected is quickly detected from the data acquired by the DVS. Detection speed is greatly improved while detection accuracy is maintained.
    Type: Application
    Filed: November 15, 2017
    Publication date: May 17, 2018
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jia LI, Qiang WANG, Feng SHI, Deheng QIAN, Dongqing ZOU, Hyunsurk Eric RYU, JinMan PARK, Jingtao XU, Keun Joo PARK, Weiheng LIU
  • Publication number: 20180137639
    Abstract: Exemplary embodiments provide an image vision processing method, device and equipment, involving: determining parallax and depth information of event pixel points in a dual-camera frame image acquired by Dynamic Vision Sensors; determining multiple neighboring event pixel points of each non-event pixel point in the dual-camera frame image; determining, according to location information of each neighboring event pixel point of each non-event pixel point, depth information of the non-event pixel point; and performing processing according to the depth information of each pixel point in the dual-camera frame image. Since non-event pixel points are not required to participate in the matching of pixel points, even if the non-event pixel points are difficult to distinguish or are occluded, their depth information can be accurately determined according to the location information of neighboring event pixel points.
    Type: Application
    Filed: November 14, 2017
    Publication date: May 17, 2018
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Dongqing ZOU, Feng SHI, Weiheng LIU, Deheng QIAN, Hyunsurk Eric RYU, Jia LI, Jingtao XU, Keun Joo PARK, Qiang WANG, Changwoo SHIN
  • Publication number: 20180101752
    Abstract: The invention discloses a deep-learning-based method and apparatus for three-dimensional (3D) model triangular facet feature learning and classification. The method includes: constructing a deep convolutional neural network (CNN) feature learning model; training the deep CNN feature learning model; extracting a feature from, and constructing a feature vector for, a 3D model triangular facet having no class label, and reconstructing a feature in the constructed feature vector using a bag-of-words algorithm; determining an output feature corresponding to the 3D model triangular facet having no class label according to the trained deep CNN feature learning model and an initial feature corresponding to the 3D model triangular facet having no class label; and performing classification. The method enhances the capability to describe 3D model triangular facets, thereby ensuring the accuracy of 3D model triangular facet feature learning and classification results.
    Type: Application
    Filed: February 22, 2017
    Publication date: April 12, 2018
    Inventors: XIAOWU CHEN, KAN GUO, DONGQING ZOU, QINPING ZHAO
  • Publication number: 20170223332
    Abstract: A method and apparatus for acquiring an image disparity are provided. The method may include acquiring, from dynamic vision sensors, a first image having a first view of an object and a second image having a second view of the object; calculating a cost within a preset disparity range of an event of the first image and a corresponding event of the second image; calculating an intermediate disparity of the event of the first image and an intermediate disparity of the event of the second image based on the cost; determining whether the event of the first image is a matched event based on the intermediate disparity of the event of the first image and the intermediate disparity of the event of the second image; and predicting optimal disparities of all events of the first image based on an intermediate disparity of the matched event of the first image.
    Type: Application
    Filed: September 27, 2016
    Publication date: August 3, 2017
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ping GUO, Dongqing ZOU, Qiang WANG, Baek Hwan CHO, Keun Joo PARK
  • Publication number: 20170223333
    Abstract: A method and apparatus for processing a binocular disparity image are provided. A method of determining a disparity of a binocular disparity image that includes a left eye image and a right eye image includes acquiring features of a plurality of pixels of the binocular disparity image based on an event distribution of the binocular disparity image, calculating a cost matrix of matching respective pixels between the left eye image and the right eye image based on the features, and determining a disparity of each matched pair of pixels based on the cost matrix.
    Type: Application
    Filed: October 31, 2016
    Publication date: August 3, 2017
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Dongqing Zou, Ping Guo, Qiang Wang, Baek Hwan Cho, Keun Joo Park
  • Publication number: 20170116481
    Abstract: The present invention provides a method for video matting via sparse and low-rank representation, which first selects frames that represent video characteristics in the input video as keyframes, then trains a dictionary according to known pixels in the keyframes, next obtains a reconstruction coefficient satisfying low-rank, sparse, and non-negativity constraints according to the dictionary, and sets the non-local relationship matrix between pixels in the input video according to the reconstruction coefficient, meanwhile setting the Laplace matrix between multiple frames; it then obtains a video alpha matte of the input video according to the α values of the known pixels of the input video and the α values of sample points in the dictionary, the non-local relationship matrix, and the Laplace matrix; and finally extracts a foreground object in the input video according to the video alpha matte, thereby improving the quality of the extracted foreground object.
    Type: Application
    Filed: June 14, 2016
    Publication date: April 27, 2017
    Inventors: XIAOWU CHEN, DONGQING ZOU, GUANGYING CAO, XIAOGANG WANG
  • Patent number: 9578312
    Abstract: A method of integrating binocular stereo video scenes while maintaining time consistency includes: propagating and extracting a contour of the moving object in stereo video A; integrating and deforming the parallax between the moving object and the dynamic scene with time consistency; and color blending the moving object and the dynamic scene with time consistency using median-coordinate fusion. The method is simple and effective: a small number of user interactions suffices to extract moving objects from the stereo video that are consistent over time and as consistent as possible between the left and right views, and multiple constraint conditions are developed to guide the integration and deformation of the parallax between the moving object and the dynamic scene, so that the moving object conforms to the perspective rules of the dynamic scene. Moreover, the deformation result for the moving object is smooth and consistent, effectively avoiding the “dithering” phenomenon.
    Type: Grant
    Filed: July 28, 2014
    Date of Patent: February 21, 2017
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Dongqing Zou, Qinping Zhao, Feng Ding