Patents by Inventor Yasmin Jahir

Yasmin Jahir has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11620779
    Abstract: Described herein are methods and systems for remote visualization of real-time three-dimensional (3D) facial animation with synchronized voice. A sensor captures frames of a face of a person, each frame comprising color images of the face, depth maps of the face, voice data associated with the person, and a timestamp. The sensor generates a 3D face model of the person using the depth maps. A computing device receives the frames of the face and the 3D face model. The computing device preprocesses the 3D face model. For each frame, the computing device: detects facial landmarks using the color images; matches the 3D face model to the depth maps using non-rigid registration; updates a texture on a front part of the 3D face model using the color images; synchronizes the 3D face model with a segment of the voice data using the timestamp; and transmits the synchronized 3D face model and voice data to a remote device.
    Type: Grant
    Filed: December 31, 2020
    Date of Patent: April 4, 2023
    Assignee: VanGogh Imaging, Inc.
    Inventors: Xiang Zhang, Xin Hou, Ken Lee, Yasmin Jahir
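    The abstract above synchronizes each frame's 3D face-model update with the voice samples recorded over the same interval, using the per-frame timestamp. Below is a minimal sketch of that timestamp-based pairing in Python/numpy; the sample rate, data layout, and function names are illustrative assumptions, not the patented implementation.

    ```python
    import numpy as np

    SAMPLE_RATE = 16_000  # assumed audio sample rate (Hz)

    def voice_segment_for_frame(t_start: float, t_end: float,
                                audio: np.ndarray) -> np.ndarray:
        """Return the slice of the voice buffer covering [t_start, t_end) seconds."""
        i0 = int(round(t_start * SAMPLE_RATE))
        i1 = int(round(t_end * SAMPLE_RATE))
        return audio[i0:min(i1, len(audio))]

    def synchronize(frames, audio):
        """Pair each frame's face-model update with its voice segment.

        `frames` is a list of (timestamp, face_model_update) tuples; the
        segment for frame k spans from its timestamp to the next frame's.
        """
        packets = []
        for k, (t, model_update) in enumerate(frames):
            t_next = frames[k + 1][0] if k + 1 < len(frames) else t + 1 / 30
            segment = voice_segment_for_frame(t, t_next, audio)
            packets.append((model_update, segment))  # transmitted together
        return packets

    # Example: three frames at ~30 fps against 0.2 s of silence.
    audio = np.zeros(int(0.2 * SAMPLE_RATE), dtype=np.float32)
    frames = [(0.000, "update0"), (0.033, "update1"), (0.066, "update2")]
    for model_update, seg in synchronize(frames, audio):
        print(model_update, len(seg), "samples")
    ```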
  • Publication number: 20210375020
    Abstract: Described herein are methods and systems for remote visualization of real-time three-dimensional (3D) facial animation with synchronized voice. A sensor captures frames of a face of a person, each frame comprising color images of the face, depth maps of the face, voice data associated with the person, and a timestamp. The sensor generates a 3D face model of the person using the depth maps. A computing device receives the frames of the face and the 3D face model. The computing device preprocesses the 3D face model. For each frame, the computing device: detects facial landmarks using the color images; matches the 3D face model to the depth maps using non-rigid registration; updates a texture on a front part of the 3D face model using the color images; synchronizes the 3D face model with a segment of the voice data using the timestamp; and transmits the synchronized 3D face model and voice data to a remote device.
    Type: Application
    Filed: December 31, 2020
    Publication date: December 2, 2021
    Inventors: Xiang Zhang, Xin Hou, Ken Lee, Yasmin Jahir
  • Patent number: 11170552
    Abstract: Described herein are methods and systems for remote visualization of three-dimensional (3D) animation. A sensor of a mobile device captures scans of non-rigid objects in a scene, each scan comprising a depth map and a color image. A server receives a first set of scans from the mobile device and reconstructs an initial model of the non-rigid objects using the first set of scans. The server receives a second set of scans. For each scan in the second set of one or more scans, the server determines an initial alignment between the depth map and the initial model. The server converts the depth map into a coordinate system of the initial model, and determines a displacement between the depth map and the initial model. The server deforms the initial model to the depth map using the displacement, and applies a texture to at least a portion of the deformed model.
    Type: Grant
    Filed: May 5, 2020
    Date of Patent: November 9, 2021
    Assignee: VanGogh Imaging, Inc.
    Inventors: Xiang Zhang, Yasmin Jahir, Xin Hou, Ken Lee
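    The abstract above converts each depth map into the initial model's coordinate system, computes a displacement between them, and deforms the model accordingly. The sketch below shows those three steps in their simplest form, assuming a rigid transform (R, t) into model coordinates and a brute-force nearest-neighbor displacement; real non-rigid registration adds regularization that this omits.

    ```python
    import numpy as np

    def to_model_coords(depth_points, R, t):
        """Rigidly convert depth-map points into the initial model's frame."""
        return depth_points @ R.T + t

    def displacement_field(model_verts, depth_points):
        """Per-vertex displacement to the nearest depth point (brute force)."""
        d = depth_points[None, :, :] - model_verts[:, None, :]  # (V, P, 3)
        nearest = np.argmin((d ** 2).sum(-1), axis=1)
        return depth_points[nearest] - model_verts

    def deform(model_verts, depth_points, step=0.5):
        """Move vertices part way along their displacement (one relaxation step)."""
        return model_verts + step * displacement_field(model_verts, depth_points)

    # Example: a two-vertex "model" pulled toward two nearby depth points.
    model = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    depth = np.array([[0.1, 0.0, 0.0], [1.2, 0.1, 0.0]])
    print(deform(model, to_model_coords(depth, np.eye(3), np.zeros(3))))
    ```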
  • Publication number: 20200357158
    Abstract: Described herein are methods and systems for remote visualization of three-dimensional (3D) animation. A sensor of a mobile device captures scans of non-rigid objects in a scene, each scan comprising a depth map and a color image. A server receives a first set of scans from the mobile device and reconstructs an initial model of the non-rigid objects using the first set of scans. The server receives a second set of scans. For each scan in the second set of one or more scans, the server determines an initial alignment between the depth map and the initial model. The server converts the depth map into a coordinate system of the initial model, and determines a displacement between the depth map and the initial model. The server deforms the initial model to the depth map using the displacement, and applies a texture to at least a portion of the deformed model.
    Type: Application
    Filed: May 5, 2020
    Publication date: November 12, 2020
    Inventors: Xiang Zhang, Yasmin Jahir, Xin Hou, Ken Lee
  • Patent number: 10380762
    Abstract: Described are methods and systems for generating a video stream of a scene including one or more objects. A sensor captures images of objects in a scene. A server coupled to the sensor, for each image, generates an initial 3D model for the objects and an initial 3D model of the scene. The server, for each image, captures pose information of the sensor as the sensor moves in relation to the scene or as the objects move in relation to the sensor. A viewing device receives the models and the pose information from the server. The viewing device captures pose information of the viewing device as the viewing device moves in relation to the scene. The viewing device renders a video stream on a display element using the received 3D models and at least one of the pose information of the sensor or the pose information of the viewing device.
    Type: Grant
    Filed: October 5, 2017
    Date of Patent: August 13, 2019
    Assignee: VanGogh Imaging, Inc.
    Inventors: Ken Lee, Yasmin Jahir, Xin Hou
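    Rendering in the entry above combines two poses: the sensor's pose places the reconstructed model in the scene, and the viewing device's pose determines the viewpoint. A minimal sketch of composing those 4x4 rigid transforms follows; the pose conventions (row-major matrices, view as the inverse of the viewer pose) are assumptions for illustration.

    ```python
    import numpy as np

    def pose_matrix(R, t):
        """Build a 4x4 rigid transform from rotation R (3x3) and translation t (3,)."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    def model_view(sensor_pose, viewer_pose):
        """Place the model with the sensor's pose, then view it from the
        viewing device's pose (view matrix = inverse of the viewer pose)."""
        return np.linalg.inv(viewer_pose) @ sensor_pose

    # Example: sensor 2 m in front of the origin, viewer rotated 90 degrees about Y.
    sensor = pose_matrix(np.eye(3), np.array([0.0, 0.0, 2.0]))
    Ry = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
    viewer = pose_matrix(Ry, np.zeros(3))
    print(model_view(sensor, viewer))
    ```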
  • Patent number: 10169676
    Abstract: Described herein are methods and systems for closed-form 3D model generation of non-rigid complex objects from scans with large holes. A computing device receives (i) a partial scan of a non-rigid complex object captured by a sensor coupled to the computing device; (ii) a partial 3D model corresponding to the object, and (iii) a whole 3D model corresponding to the object, wherein the partial 3D scan and the partial 3D model each includes one or more large holes. The device performs a rough match on the partial 3D model and changes the whole 3D model using the rough match to generate a deformed 3D model. The device refines the deformed 3D model using a deformation graph, reshapes the refined deformed 3D model to have greater detail, and adjusts the whole 3D model according to the reshaped 3D model to generate a closed-form 3D model that closes holes in the scan.
    Type: Grant
    Filed: February 23, 2017
    Date of Patent: January 1, 2019
    Assignee: VanGogh Imaging, Inc.
    Inventors: Xin Hou, Yasmin Jahir, Jun Yin
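    The abstract above refines a deformed model using a deformation graph. The sketch below illustrates the classic embedded-deformation formulation of that idea, in which each vertex blends the local transforms of its nearest graph nodes; this is a common stand-in, and the patent's exact scheme may differ.

    ```python
    import numpy as np

    def embedded_deform(verts, nodes, node_R, node_t, k=4):
        """Deform vertices by blending the rigid transforms of the k nearest
        graph nodes, weighted by inverse distance (embedded deformation)."""
        out = np.zeros_like(verts)
        for i, v in enumerate(verts):
            d = np.linalg.norm(nodes - v, axis=1)
            nn = np.argsort(d)[:k]
            w = 1.0 / (d[nn] + 1e-8)
            w /= w.sum()
            for j, wj in zip(nn, w):
                out[i] += wj * (node_R[j] @ (v - nodes[j]) + nodes[j] + node_t[j])
        return out

    # Example: two nodes; translating the second node pulls the nearby vertex up.
    verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    nodes = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    R = [np.eye(3), np.eye(3)]
    t = np.array([[0.0, 0.0, 0.0], [0.0, 0.5, 0.0]])
    print(embedded_deform(verts, nodes, R, t, k=2))
    ```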
  • Publication number: 20180101966
    Abstract: Described are methods and systems for generating a video stream of a scene including one or more objects. A sensor captures images of objects in a scene. A server coupled to the sensor, for each image, generates an initial 3D model for the objects and an initial 3D model of the scene. The server, for each image, captures pose information of the sensor as the sensor moves in relation to the scene or as the objects move in relation to the sensor. A viewing device receives the models and the pose information from the server. The viewing device captures pose information of the viewing device as the viewing device moves in relation to the scene. The viewing device renders a video stream on a display element using the received 3D models and at least one of the pose information of the sensor or the pose information of the viewing device.
    Type: Application
    Filed: October 5, 2017
    Publication date: April 12, 2018
    Inventors: Ken Lee, Yasmin Jahir, Xin Hou
  • Publication number: 20170243397
    Abstract: Described herein are methods and systems for closed-form 3D model generation of non-rigid complex objects from scans with large holes. A computing device receives (i) a partial scan of a non-rigid complex object captured by a sensor coupled to the computing device; (ii) a partial 3D model corresponding to the object, and (iii) a whole 3D model corresponding to the object, wherein the partial 3D scan and the partial 3D model each includes one or more large holes. The device performs a rough match on the partial 3D model and changes the whole 3D model using the rough match to generate a deformed 3D model. The device refines the deformed 3D model using a deformation graph, reshapes the refined deformed 3D model to have greater detail, and adjusts the whole 3D model according to the reshaped 3D model to generate a closed-form 3D model that closes holes in the scan.
    Type: Application
    Filed: February 23, 2017
    Publication date: August 24, 2017
    Inventors: Xin Hou, Yasmin Jahir, Jun Yin
  • Patent number: 9715761
    Abstract: Methods and systems are described for generating a three-dimensional (3D) model of a fully-formed object represented in a noisy or partial scene. An image processing module of a computing device receives images captured by a sensor. The module generates partial 3D mesh models of physical objects in the scene based upon analysis of the images, and determines a location of at least one target object in the scene by comparing the images to one or more 3D reference models and extracting a 3D point cloud of the target object. The module matches the 3D point cloud of the target object to a selected 3D reference model based upon a similarity parameter, and detects one or more features of the target object. The module generates a fully formed 3D model of the target object using partial or noisy 3D points from the scene, extracts the detected features of the target object and features of the 3D reference models that correspond to the detected features, and calculates measurements of the detected features.
    Type: Grant
    Filed: July 7, 2014
    Date of Patent: July 25, 2017
    Assignee: VanGogh Imaging, Inc.
    Inventors: Ken Lee, Jun Yin, Xin Hou, Greg Werth, Yasmin Jahir
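    The entry above matches an extracted 3D point cloud to a reference model "based upon a similarity parameter." One plausible reading, sketched below, scores each candidate reference by nearest-neighbor RMS residual and accepts the best one under a threshold; the scoring function, threshold, and names here are assumptions, not the claimed method.

    ```python
    import numpy as np

    def rms_residual(cloud, reference):
        """Nearest-neighbor RMS distance from cloud to reference (brute force)."""
        d = np.linalg.norm(cloud[:, None, :] - reference[None, :, :], axis=-1)
        return float(np.sqrt((d.min(axis=1) ** 2).mean()))

    def best_reference(cloud, references, max_residual=0.05):
        """Pick the reference model with the lowest residual, if any qualifies."""
        scored = [(rms_residual(cloud, ref), name)
                  for name, ref in references.items()]
        score, name = min(scored)
        return name if score <= max_residual else None

    # Example: a tight cloud near the origin matches the similarly tight reference.
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(50, 3)) * 0.01
    refs = {"wide_blob": rng.normal(size=(100, 3)),
            "origin_blob": rng.normal(size=(100, 3)) * 0.01}
    print(best_reference(cloud, refs))
    ```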
  • Publication number: 20150009214
    Abstract: Methods and systems are described for generating a three-dimensional (3D) model of a fully-formed object represented in a noisy or partial scene. An image processing module of a computing device receives images captured by a sensor. The module generates partial 3D mesh models of physical objects in the scene based upon analysis of the images, and determines a location of at least one target object in the scene by comparing the images to one or more 3D reference models and extracting a 3D point cloud of the target object. The module matches the 3D point cloud of the target object to a selected 3D reference model based upon a similarity parameter, and detects one or more features of the target object. The module generates a fully formed 3D model of the target object using partial or noisy 3D points from the scene, extracts the detected features of the target object and features of the 3D reference models that correspond to the detected features, and calculates measurements of the detected features.
    Type: Application
    Filed: July 7, 2014
    Publication date: January 8, 2015
    Inventors: Ken Lee, Jun Yin, Xin Hou, Greg Werth, Yasmin Jahir