Patents by Inventor Qi Yin

Qi Yin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10102421
    Abstract: A method for face recognition in video comprises: performing feature extraction on a target face in multiple image frames of the video to generate multiple face feature vectors respectively corresponding to the target face in the multiple image frames; performing time-sequence feature extraction on the multiple face feature vectors to convert them into a feature vector of a predetermined dimension; and judging the feature vector of the predetermined dimension by using a classifier so as to recognize the target face.
    Type: Grant
    Filed: November 1, 2016
    Date of Patent: October 16, 2018
    Assignee: PINHOLE (BEIJING) TECHNOLOGY CO., LTD.
    Inventors: Erjin Zhou, Qi Yin
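The pipeline in the abstract above (per-frame feature extraction, temporal aggregation to a fixed dimension, then classification) can be sketched as follows. This is an illustrative stand-in, not the patented implementation: mean pooling replaces the time-sequence feature extraction step, and the nearest-prototype classifier, identity names, and dimensions are all hypothetical.

```python
import numpy as np

def aggregate_frame_features(frame_features: np.ndarray) -> np.ndarray:
    """Collapse per-frame face feature vectors (T x D) into one fixed-dimension
    vector; mean pooling stands in for the time-sequence extraction step."""
    return frame_features.mean(axis=0)

def classify(feature: np.ndarray, prototypes: dict) -> str:
    """Toy nearest-prototype classifier over known identities."""
    return min(prototypes, key=lambda name: np.linalg.norm(feature - prototypes[name]))

# Example: 5 frames of 4-dimensional embeddings for one target face.
rng = np.random.default_rng(0)
frames = np.tile(np.array([1.0, 0.0, 0.0, 0.0]), (5, 1)) + 0.01 * rng.standard_normal((5, 4))
video_feature = aggregate_frame_features(frames)
identity = classify(video_feature, {"alice": np.array([1.0, 0.0, 0.0, 0.0]),
                                    "bob": np.array([0.0, 1.0, 0.0, 0.0])})
```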
  • Patent number: 10080009
    Abstract: A method for obtaining images for three dimension (3D) reconstruction comprises: controlling brightness of each of at least two light sources which are spatially separated from each other to be changed periodically, wherein, among the at least two light sources, there is at least one light source having a period of brightness change different from those of the other light source(s), or the periods of the brightness change of the at least two light sources are the same, but among the at least two light sources, there is at least one light source having a phase of the brightness change different from those of the other light source(s); and using at least three cameras having different spatial positions to capture the images for the 3D reconstruction, respectively, wherein, among the at least three cameras, there is at least one camera starting exposure at a different time from the other cameras.
    Type: Grant
    Filed: January 29, 2016
    Date of Patent: September 18, 2018
    Assignees: BEIJING KUANGSHI TECHNOLOGY CO., LTD., PINHOLE (BEIJING) TECHNOLOGY CO., LTD.
    Inventors: Haoqiang Fan, Qi Yin
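The brightness-modulation scheme in the claim above can be illustrated with a simple periodic schedule. This is a sketch under assumed waveforms: the patent does not specify sinusoids, and the periods, phases, and sampling time below are hypothetical.

```python
import math

def brightness(t: float, period: float, phase: float) -> float:
    """Sinusoidal brightness schedule (in [0, 1]) for one light source."""
    return 0.5 * (1.0 + math.sin(2.0 * math.pi * t / period + phase))

# Two sources share a period but differ in phase (the claim's second variant);
# a third source uses a different period (the first variant).
sources = [(1.0, 0.0), (1.0, math.pi / 2), (0.5, 0.0)]
t = 0.25  # exposure start time of one camera, arbitrary units
levels = [brightness(t, period, phase) for period, phase in sources]
# Cameras starting exposure at different times therefore observe
# distinguishable illumination from the different sources.
```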
  • Publication number: 20180251728
    Abstract: The present invention relates to AG-haESCs in which the H19 DMR and IG-DMR are knocked out, a method for preparing the AG-haESCs, and use of the AG-haESCs in constructing a genetically modified semi-cloned (SC) animal and a library of genetically modified semi-cloned animals. The AG-haESCs acquire characteristics resembling those of a round spermatid, and upon injection into an oocyte, a viable SC mouse is stably obtained. The present invention can be effectively used in multi-gene genetic manipulation, advancing the acquisition of animals with multiple genetic modifications.
    Type: Application
    Filed: July 2, 2015
    Publication date: September 6, 2018
    Applicant: SHANGHAI INSTITUTES FOR BIOLOGICAL SCIENCES CHINESE ACADEMY OF SCIENCES
    Inventors: JINSONG LI, YUXUAN WU, CUIQING ZHONG, QI YIN, ZHENFEI XIE, MEIZHU BAI
  • Publication number: 20180204057
    Abstract: An object detection method and an object detection apparatus are provided. The object detection method includes: mapping at least one image frame in an image sequence into a three dimensional physical space to obtain three dimensional coordinates of each pixel in the at least one image frame; extracting a foreground region in the at least one image frame; segmenting the foreground region into a set of blobs; and detecting, for each blob in the set of blobs, an object in the blob through a neural network based on the three dimensional coordinates of at least one predetermined reference point in the blob, to obtain an object detection result.
    Type: Application
    Filed: March 15, 2018
    Publication date: July 19, 2018
    Inventors: Gang Yu, Chao Li, Qizheng He, Qi Yin
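The first step of the method above, mapping an image frame into a three-dimensional physical space, is conventionally done by back-projecting each pixel through a pinhole camera model using its depth value. A minimal sketch under that assumption; the camera intrinsics below are hypothetical, and the patent does not prescribe this particular model.

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with known depth into camera-space 3D
    coordinates via the pinhole model: x = (u - cx) * depth / fx, etc."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Hypothetical intrinsics for a 640x480 depth camera.
FX, FY, CX, CY = 500.0, 500.0, 320.0, 240.0
center = pixel_to_3d(320, 240, 2.0, FX, FY, CX, CY)   # principal point maps to the optical axis
offset = pixel_to_3d(420, 240, 2.0, FX, FY, CX, CY)   # 100 px right of center at 2 m depth
```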
  • Patent number: 9996743
    Abstract: Methods, systems, and media for detecting gaze locking are provided. In some embodiments, methods for gaze locking are provided, the methods comprising: receiving an input image including a face; locating a pair of eyes in the face of the input image; generating a coordinate frame based on the pair of eyes; identifying an eye region in the coordinate frame; generating, using a hardware processor, a feature vector based on values of pixels in the eye region; and determining whether the face is gaze locking based on the feature vector.
    Type: Grant
    Filed: November 26, 2013
    Date of Patent: June 12, 2018
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Brian Anthony Smith, Qi Yin, Shree Kumar Nayar
  • Patent number: 9940509
    Abstract: An object detection method and an object detection apparatus are provided. The object detection method comprises: mapping at least one image frame in an image sequence into a three dimensional physical space to obtain three dimensional coordinates of each pixel in the at least one image frame; extracting a foreground region in the at least one image frame; segmenting the foreground region into a set of blobs; and detecting, for each blob in the set of blobs, an object in the blob through a neural network based on the three dimensional coordinates of at least one predetermined reference point in the blob, to obtain an object detection result.
    Type: Grant
    Filed: January 25, 2016
    Date of Patent: April 10, 2018
    Assignees: BEIJING KUANGSHI TECHNOLOGY CO., LTD., PINHOLE (BEIJING) TECHNOLOGY CO., LTD.
    Inventors: Gang Yu, Chao Li, Qizheng He, Qi Yin
  • Patent number: 9940532
    Abstract: A liveness detection apparatus and a liveness detection method are provided. The liveness detection apparatus may comprise: a specific exhibiting device, for exhibiting a specific identification content; an image acquiring device, for acquiring image data of a target object to be recognized during the exhibition of the identification content; and a processor, for determining whether there is a reflective region corresponding to the identification content in the acquired image data, determining, when the reflective region is present, a regional feature of the reflective region to obtain a determination result, and recognizing whether the target object is a living body based on the determination result.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: April 10, 2018
    Assignees: BEIJING KUANGSHI TECHNOLOGY CO., LTD., PINHOLE (BEIJING) TECHNOLOGY CO., LTD.
    Inventors: Haoqiang Fan, Kai Jia, Qi Yin
  • Publication number: 20180032840
    Abstract: The embodiments of the present invention provide training and construction methods and apparatus of a neural network for object detection, an object detection method and apparatus based on a neural network, and a neural network. The training method of the neural network for object detection comprises: inputting a training image including a training object to the neural network to obtain a predicted bounding box of the training object; acquiring a first loss function according to a ratio of the intersection area to the union area of the predicted bounding box and a true bounding box, the true bounding box being a bounding box of the training object marked in advance in the training image; and adjusting parameters of the neural network by utilizing at least the first loss function to train the neural network.
    Type: Application
    Filed: July 26, 2017
    Publication date: February 1, 2018
    Inventors: Jiahui YU, Qi YIN
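The first loss function described above is built from the intersection-over-union (IoU) ratio of the predicted and true bounding boxes. A minimal sketch of such an IoU-based loss; the specific functional form, ln(1/IoU), is one common choice and an assumption here, not taken from the abstract.

```python
import math

def iou_loss(pred, truth):
    """Loss derived from the ratio of the intersection area to the union area
    of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(pred[0], truth[0]), max(pred[1], truth[1])
    ix2, iy2 = min(pred[2], truth[2]), min(pred[3], truth[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(pred) + area(truth) - inter
    iou = inter / union if union > 0 else 0.0
    return math.log(1.0 / iou) if iou > 0 else float("inf")

perfect = iou_loss((0, 0, 2, 2), (0, 0, 2, 2))  # coincident boxes: zero loss
partial = iou_loss((0, 0, 2, 2), (1, 1, 3, 3))  # partial overlap: positive loss
```

Because the loss depends on the box as a whole rather than on each coordinate independently, gradients from it push all four coordinates jointly toward better overlap.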
  • Patent number: 9875411
    Abstract: The present disclosure relates to a video monitoring method and a video monitoring system based on a depth video. The video monitoring method comprises: obtaining video data collected by a video collecting module; determining an object as a monitored target based on pre-set scene information and the video data; extracting characteristic information of the object; and determining predictive information of the object based on the characteristic information, wherein the video data comprises video data including the depth information.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: January 23, 2018
    Assignees: BEIJING KUANGSHI TECHNOLOGY CO., LTD., PINHOLE (BEIJING) TECHNOLOGY CO., LTD.
    Inventors: Gang Yu, Chao Li, Qizheng He, Qi Yin
  • Publication number: 20180005047
    Abstract: This application provides a video monitoring method and device. The video monitoring method includes: obtaining video data; inputting at least one frame in the video data into a first neural network to determine object amount information of each pixel dot in the at least one frame; and executing at least one of the following operations by using a second neural network: performing a smoothing operation based on the object amount information in the at least one frame to rectify the object amount information; determining object density information of each pixel dot in the at least one frame based on scene information and the object amount information; predicting object density information of each pixel dot in a to-be-predicted frame next to the at least one frame based on the scene information, the object amount information, and association information between the at least one frame and the to-be-predicted frame.
    Type: Application
    Filed: June 15, 2017
    Publication date: January 4, 2018
    Inventors: Gang YU, Chao LI, Qizheng HE, Muge CHEN, Yuxiang PENG, Qi YIN
  • Publication number: 20180005054
    Abstract: Provided are a driving assistance information generating method and device, and a driving assistance system, which relate to a field of vehicle driving assistance technique. The driving assistance information generating method comprises: obtaining a depth image acquired by a depth camera and a scene image acquired by an imaging camera, the depth camera and the imaging camera being mounted on a vehicle in a manner of being registered and matched with each other, and a coverage of the depth camera and a coverage of the imaging camera being at least partially overlapped; detecting positions of respective objects appearing in the scene image by using the depth image and the scene image; and generating driving assistance information of the vehicle based on positions of the respective objects.
    Type: Application
    Filed: February 27, 2017
    Publication date: January 4, 2018
    Inventors: Gang YU, Chao LI, Qizheng HE, Muge CHEN, Yuxiang PENG, Qi YIN
  • Publication number: 20170374268
    Abstract: There are provided a focusing point determining method and apparatus. The focusing point determining method comprises: obtaining a view-finding image within a view-finding coverage; identifying a significance area in the view-finding image; and extracting at least one focusing point from the identified significance area. By identifying the significance area in the view-finding image and extracting at least one focusing point from it, the focusing point determining method and apparatus can ensure, to a certain extent, the accuracy of the selected focusing point and thus the accuracy of focusing.
    Type: Application
    Filed: February 27, 2017
    Publication date: December 28, 2017
    Inventors: Shuchang ZHOU, Cong YAO, Xinyu ZHOU, Weiran HE, Dieqiao FENG, Qi YIN
  • Publication number: 20170345181
    Abstract: This application provides a video monitoring method and process. The video monitoring method comprises: obtaining first and second video data of a scene being monitored; detecting at least one target object based on the first video data, and determining parameter information of the at least one target object in at least one frame of the first video data, the parameter information including a first position; determining, based on a coordinate transforming relationship between the first and the second video data, a second position of the at least one target object in a corresponding frame in the second video data according to the first position; and extracting, based on the second video data, feature information of the at least one target object located in the second position, wherein the orientations with which the first and the second video acquiring modules acquire video data with respect to a ground plane are different.
    Type: Application
    Filed: May 17, 2017
    Publication date: November 30, 2017
    Inventors: Gang YU, Chao LI, Qizheng HE, Muge CHEN, Qi YIN
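For targets on a shared ground plane, the "coordinate transforming relationship" between the two differently oriented views above could take the form of a planar homography. A sketch under that assumption; the matrix values below are hypothetical.

```python
import numpy as np

def transform_position(pos_xy, H):
    """Map a target's pixel position in the first view into the second view
    using a 3x3 homography in homogeneous coordinates."""
    x, y = pos_xy
    p = H @ np.array([x, y, 1.0])
    return (p[0] / p[2], p[1] / p[2])

# Hypothetical homography: a pure translation by (100, 50) pixels. In practice
# H would be estimated from corresponding ground-plane points in both views.
H = np.array([[1.0, 0.0, 100.0],
              [0.0, 1.0,  50.0],
              [0.0, 0.0,   1.0]])
second_pos = transform_position((320.0, 240.0), H)
```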
  • Publication number: 20170345146
    Abstract: The application provides a liveness detection method and a liveness detection system. The liveness detection method includes: obtaining first and second face image data of an object to be detected, at least one of the first and the second face image data being a depth image; determining a first face region and a second face region, determining whether the first and the second face regions correspond to each other, and extracting, when it is determined that they correspond to each other, a first and a second face image from the first and the second face regions respectively; determining a first classification result for the extracted first face image and a second classification result for the extracted second face image; and determining, based on the first classification result and the second classification result, a detection result for the object to be detected.
    Type: Application
    Filed: May 16, 2017
    Publication date: November 30, 2017
    Inventors: Haoqiang FAN, Qi YIN
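The final step above, combining the first and second classification results into one detection result, admits many fusion rules. A minimal sketch assuming the two classifiers emit liveness scores in [0, 1] and using score averaging; both assumptions are illustrative, not taken from the abstract.

```python
def fuse_liveness(score_rgb: float, score_depth: float, threshold: float = 0.5) -> str:
    """Combine the classification results from the two face images (e.g. one
    RGB, one depth) into a single detection result by averaging the scores."""
    return "live" if (score_rgb + score_depth) / 2.0 >= threshold else "spoof"

confident = fuse_liveness(0.9, 0.8)   # both modalities agree on liveness
rejected = fuse_liveness(0.8, 0.1)    # flat photo: depth channel disagrees
```

The depth modality is what makes this fusion useful: a printed photo can score high on the RGB branch but has no facial relief, so its depth score drags the average down.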
  • Publication number: 20170330333
    Abstract: A method and apparatus for detecting a movement direction of a target object are provided. The method comprises: acquiring a video sequence of the target object, and determining positions of the target object in a plurality of image frames of the video sequence; setting a plurality of base vectors within the coverage range of an image collection apparatus capturing the video sequence, wherein the angle between each of the base vectors and a reference vector set according to the specific scene being captured satisfies a predetermined condition; for each of the base vectors, determining a movement direction of the target object between each pair of adjacent image frames with respect to the base vector, and thereby determining a general movement direction of the target object with respect to that base vector; and determining a movement direction of the target object according to the general movement directions of the target object.
    Type: Application
    Filed: December 14, 2016
    Publication date: November 16, 2017
    Inventors: Le LIN, Gang YU, Qi YIN
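The per-pair direction judgment and its aggregation into a general direction, as described above, can be sketched with dot-product signs and a majority vote. Both rules are illustrative assumptions; the abstract does not fix either one.

```python
def direction_wrt_base(displacement, base):
    """Classify a between-frame displacement as 'along' or 'against' a base
    vector by the sign of their dot product."""
    dot = displacement[0] * base[0] + displacement[1] * base[1]
    return "along" if dot >= 0 else "against"

def general_direction(positions, base):
    """Majority vote over consecutive frame pairs: the abstract's
    per-pair judgments aggregated into a general movement direction."""
    votes = [direction_wrt_base((x2 - x1, y2 - y1), base)
             for (x1, y1), (x2, y2) in zip(positions, positions[1:])]
    return max(set(votes), key=votes.count)

track = [(0, 0), (1, 0), (2, 1), (3, 1)]  # a mostly rightward trajectory
overall = general_direction(track, base=(1, 0))
```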
  • Publication number: 20170286788
    Abstract: The application provides a liveness detection method, and a liveness detection system that employs the liveness detection method. The liveness detection method comprises: irradiating an object to be detected with structured light; obtaining first facial image data of the object to be detected under irradiation of the structured light; determining, based on the first facial image data, a detection parameter that indicates a sub-surface scattering intensity of the structured light on a face of the object to be detected; and determining, based on the detection parameter and a predetermined parameter threshold, whether the object to be detected is a living body.
    Type: Application
    Filed: January 20, 2017
    Publication date: October 5, 2017
    Inventors: Haoqiang FAN, Qi YIN
  • Publication number: 20170201734
    Abstract: A 3D image display method and a head-mounted device are provided. The method includes: building a 3D background model and setting a video image display area in the 3D background model; acquiring video image data, and projecting the video image data into the video image display area of the 3D background model; acquiring a display parameter of the head-mounted device, and, according to the display parameter, performing image morphing processing of the 3D background model to which the video image data are projected, to generate a first 3D video image corresponding to a left eye and a second 3D video image corresponding to a right eye; and displaying the first 3D video image and the second 3D video image in the video image display area after refracting them through two lenses, respectively. The above technical solutions can solve the problem of the narrow 3D image display angle in existing head-mounted devices.
    Type: Application
    Filed: December 28, 2015
    Publication date: July 13, 2017
    Applicant: QINGDAO GOERTEK TECHNOLOGY CO., LTD.
    Inventors: Qi YIN, Hongwei ZHOU
  • Publication number: 20170193310
    Abstract: A method and an apparatus for measuring a speed of an object are provided, the method comprising: acquiring, for each image frame of at least two image frames in an image sequence, image information of the image frame, wherein the image information comprises depth information; detecting at least one object in the image frame based on the depth information to obtain an object detection result for the image frame; tracking the at least one detected object based on the object detection result of each of the at least two image frames to obtain an object tracking result; and calculating the speed of the at least one detected object based on the object tracking result and a time difference between the at least two image frames.
    Type: Application
    Filed: December 27, 2016
    Publication date: July 6, 2017
    Inventors: Gang YU, Chao LI, Qizheng HE, Muge CHEN, Yuxiang PENG, Qi YIN
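The final step of the method above, computing speed from the tracking result and the time difference between frames, reduces to displacement over time. A minimal sketch assuming 3D positions in metres recovered from the depth information; the coordinates and timing are hypothetical.

```python
def estimate_speed(p1, p2, dt):
    """Speed of a tracked object from two 3D positions (metres) and the
    time difference dt (seconds) between the corresponding frames."""
    dx, dy, dz = (b - a for a, b in zip(p1, p2))
    return (dx * dx + dy * dy + dz * dz) ** 0.5 / dt

# Object moves 3 m in x and 4 m in y (a 5 m displacement) over 2 s -> 2.5 m/s.
v = estimate_speed((0.0, 0.0, 5.0), (3.0, 4.0, 5.0), dt=2.0)
```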
  • Publication number: 20170193286
    Abstract: A method for face recognition in video comprises: performing feature extraction on a target face in multiple image frames of the video to generate multiple face feature vectors respectively corresponding to the target face in the multiple image frames; performing time-sequence feature extraction on the multiple face feature vectors to convert them into a feature vector of a predetermined dimension; and judging the feature vector of the predetermined dimension by using a classifier so as to recognize the target face.
    Type: Application
    Filed: November 1, 2016
    Publication date: July 6, 2017
    Inventors: Erjin ZHOU, Qi YIN
  • Patent number: D801505
    Type: Grant
    Filed: March 24, 2016
    Date of Patent: October 31, 2017
    Assignee: 3M INNOVATIVE PROPERTIES COMPANY
    Inventors: Qi Yin, Xinchun Tang, Ya Feng Shen, Pei Lv