Patents by Inventor Jingyi Yu

Jingyi Yu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230273315
    Abstract: Described herein are systems and methods of capturing motions of humans in a scene. A plurality of IMU devices and a LiDAR sensor are mounted on a human. IMU data is captured by the IMU devices and LiDAR data is captured by the LiDAR sensor. Motions of the human are estimated based on the IMU data and the LiDAR data. A three-dimensional scene map is built based on the LiDAR data. An optimization is performed to obtain optimized motions of the human and an optimized scene map.
    Type: Application
    Filed: August 9, 2022
    Publication date: August 31, 2023
    Inventors: Chenglu WEN, Yudi Dai, Lan Xu, Cheng Wang, Jingyi Yu
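The abstract above fuses drift-prone IMU motion estimates with LiDAR-derived ones before a joint optimization. The patented optimizer is not described in this listing; as a minimal sketch of the fusion idea, a complementary weighted blend of per-frame translation estimates (weights here are hypothetical) looks like:

```python
import numpy as np

def fuse_translations(imu_t, lidar_t, lidar_weight=0.8):
    """Blend per-frame translation estimates from IMU integration
    (drift-prone) and LiDAR registration (noisier but drift-free).
    A simple complementary blend, not the patented optimization."""
    imu_t = np.asarray(imu_t, dtype=float)
    lidar_t = np.asarray(lidar_t, dtype=float)
    return lidar_weight * lidar_t + (1.0 - lidar_weight) * imu_t

# When both sensors agree, the fused estimate matches them exactly.
t = fuse_translations([1.0, 2.0, 0.5], [1.0, 2.0, 0.5])
```

In practice the patent describes a joint optimization over motions and the scene map rather than a fixed-weight blend; the sketch only shows where the two data streams meet.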
  • Patent number: 11727628
    Abstract: A method of rendering an object is provided. The method comprises: encoding a feature vector to each point in a point cloud for an object, wherein the feature vector comprises an alpha matte; projecting each point in the point cloud and the corresponding feature vector to a target view to compute a feature map; and using a neural rendering network to decode the feature map into an RGB image and the alpha matte and to update the feature vector.
    Type: Grant
    Filed: November 4, 2022
    Date of Patent: August 15, 2023
    Assignee: ShanghaiTech University
    Inventors: Cen Wang, Jingyi Yu
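The projection step in this abstract (splatting each point and its feature vector into a target view before neural decoding) can be sketched with a standard pinhole model and a z-buffer. The intrinsics, image size, and feature dimensions below are illustrative, not taken from the patent:

```python
import numpy as np

def project_points(points, features, K, H, W):
    """Splat 3D points and their per-point feature vectors into a 2D
    feature map for a target view with intrinsics K, keeping the
    nearest point per pixel (z-buffer). Illustrative sketch only."""
    fmap = np.zeros((H, W, features.shape[1]))
    zbuf = np.full((H, W), np.inf)
    for p, f in zip(points, features):
        x, y, z = p
        if z <= 0:            # behind the camera
            continue
        u = int(round(K[0, 0] * x / z + K[0, 2]))
        v = int(round(K[1, 1] * y / z + K[1, 2]))
        if 0 <= u < W and 0 <= v < H and z < zbuf[v, u]:
            zbuf[v, u] = z
            fmap[v, u] = f
    return fmap

K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])
# One point on the optical axis lands at the principal point (32, 32).
fmap = project_points(np.array([[0.0, 0.0, 2.0]]),
                      np.array([[1.0, 0.5]]), K, 64, 64)
```

The patented method then feeds such a feature map to a neural rendering network that outputs the RGB image and alpha matte; that decoder is not reproduced here.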
  • Publication number: 20230071559
    Abstract: A method of rendering an object is provided. The method comprises: encoding a feature vector to each point in a point cloud for an object, wherein the feature vector comprises an alpha matte; projecting each point in the point cloud and the corresponding feature vector to a target view to compute a feature map; and using a neural rendering network to decode the feature map into an RGB image and the alpha matte and to update the feature vector.
    Type: Application
    Filed: November 4, 2022
    Publication date: March 9, 2023
    Inventors: Cen WANG, Jingyi Yu
  • Publication number: 20230027234
    Abstract: An image-based method of modeling and rendering a three-dimensional model of an object is provided. The method comprises: obtaining a three-dimensional point cloud at each frame of a synchronized, multi-view video of an object, wherein the video comprises a plurality of frames; extracting a feature descriptor for each point in the point cloud for the plurality of frames without storing the feature descriptor for each frame; producing a two-dimensional feature map for a target camera; and using an anti-aliased convolutional neural network to decode the feature map into an image and a foreground mask.
    Type: Application
    Filed: September 23, 2022
    Publication date: January 26, 2023
    Inventors: Minye WU, Jingyi YU
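The key storage claim in this abstract is that one feature descriptor per point serves every frame of the multi-view video, rather than a descriptor per point per frame. Back-of-the-envelope arithmetic (point, frame, and descriptor counts are hypothetical) shows the saving:

```python
# One descriptor per point, shared across all frames, versus a
# per-frame copy of every descriptor. Numbers are illustrative.
n_points, n_frames, dim = 10_000, 300, 32
shared = n_points * dim               # descriptors stored once
per_frame = n_points * n_frames * dim # descriptors stored every frame
ratio = per_frame // shared           # storage factor saved by sharing
```

For a 300-frame sequence the shared scheme stores 300x fewer descriptor values, which is what makes per-frame extraction without per-frame storage attractive.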
  • Patent number: 11528427
    Abstract: An image capturing system includes a center point location, and a circular light source ring centered on the center point location. Light sources are on the circular light source ring, and each emit light in one of a number of spectral bandwidths. The image capturing system also includes circular camera rings, where each circular camera ring is centered on the center point location, where each circular camera ring includes camera locations which are equally spaced apart. The image capturing system also includes one or more cameras configured to capture images of an object from each of the camera locations of each particular circular camera ring, where the images captured from the camera locations of each particular circular camera ring are captured in one of the different spectral bandwidths, and where the object is illuminated by the light sources of the light source ring while each of the images is captured.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: December 13, 2022
    Inventors: Yu Ji, Mingyuan Zhou, Jingyi Yu
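The geometry this abstract describes, equally spaced camera locations on a ring centered on a common point, is straightforward to parameterize. The 2D sketch below (center, radius, and count are arbitrary) generates such locations:

```python
import math

def ring_positions(center, radius, n):
    """Equally spaced locations on a ring centered on `center`,
    an illustrative parameterization of the camera-ring geometry."""
    cx, cy = center
    return [(cx + radius * math.cos(2 * math.pi * k / n),
             cy + radius * math.sin(2 * math.pi * k / n))
            for k in range(n)]

# Four cameras on a unit ring: right, top, left, bottom.
pos = ring_positions((0.0, 0.0), 1.0, 4)
```

The patented system stacks several such rings (one per spectral bandwidth) around the same center point; this sketch covers a single ring only.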
  • Patent number: 11410459
    Abstract: A method of detecting and recognizing faces using a light field camera array is provided. The method includes capturing multi-view color images using the light field camera array; obtaining a depth map; conducting light field rendering using a weight function comprising a depth component and a semantic component, where the weight function assigns a weight to each ray in the light field; and detecting and recognizing a face.
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: August 9, 2022
    Assignee: ShanghaiTech University
    Inventors: Zhiru Shi, Minye Wu, Wenguang Ma, Jingyi Yu
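The two-component weight function in this abstract can be sketched as a weighted ray blend: each candidate ray carries a depth-consistency score and a semantic (face-region) score, and the per-ray weight is their product. The scores and dictionary layout below are hypothetical stand-ins, not the patented formulas:

```python
def render_pixel(rays):
    """Blend candidate rays into one pixel value. The weight of each
    ray is its depth score times its semantic score, per the
    abstract's two weight components. Illustrative only."""
    total_w = sum(r["depth"] * r["semantic"] for r in rays)
    if total_w == 0:
        return 0.0
    return sum(r["color"] * r["depth"] * r["semantic"]
               for r in rays) / total_w

px = render_pixel([
    {"color": 1.0, "depth": 1.0, "semantic": 1.0},   # face-region ray
    {"color": 0.0, "depth": 1.0, "semantic": 0.0},   # non-face ray, zero weight
])
```

Multiplying the components means a ray contributes only when it is both geometrically consistent and semantically relevant, which is the intuition behind combining the two terms.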
  • Patent number: 11354840
    Abstract: A method and system of using multiple image cameras or multiple image and depth cameras to capture a target object. Geometry and texture are reconstructed using captured images and depth images. New images are rendered using geometry-based rendering methods or image-based rendering methods.
    Type: Grant
    Filed: March 19, 2020
    Date of Patent: June 7, 2022
    Inventor: Jingyi Yu
  • Publication number: 20220174224
    Abstract: An image capturing system includes a center point location, and a circular light source ring centered on the center point location. Light sources are on the circular light source ring, and each emit light in one of a number of spectral bandwidths. The image capturing system also includes circular camera rings, where each circular camera ring is centered on the center point location, where each circular camera ring includes camera locations which are equally spaced apart. The image capturing system also includes one or more cameras configured to capture images of an object from each of the camera locations of each particular circular camera ring, where the images captured from the camera locations of each particular circular camera ring are captured in one of the different spectral bandwidths, and where the object is illuminated by the light sources of the light source ring while each of the images is captured.
    Type: Application
    Filed: December 22, 2020
    Publication date: June 2, 2022
    Inventors: Yu JI, Mingyuan ZHOU, Jingyi YU
  • Publication number: 20210241462
    Abstract: According to some embodiments, an imaging processing method for extracting a plurality of planar surfaces from a depth map includes computing a depth change indication map (DCI) from a depth map in accordance with a smoothness threshold. The imaging processing method further includes recursively extracting a plurality of planar regions from the depth map, wherein the size of each planar region is dynamically adjusted according to the DCI. The imaging processing method further includes clustering the extracted planar regions into a plurality of groups in accordance with a distance function; and growing each group to generate pixel-wise segmentation results and inlier points statistics simultaneously.
    Type: Application
    Filed: March 31, 2021
    Publication date: August 5, 2021
    Inventors: Ziran XING, Zhiru SHI, Yi MA, Jingyi YU
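The DCI step in this abstract marks where depth changes exceed a smoothness threshold. One simplified reading (the exact neighborhood rule is not given in the listing) flags any pixel whose depth differs from a 4-neighbor by more than the threshold:

```python
import numpy as np

def depth_change_indication(depth, smoothness):
    """Flag pixels whose depth jump to a 4-neighbour exceeds the
    smoothness threshold: a simplified DCI, illustrative only."""
    dci = np.zeros(depth.shape, dtype=bool)
    dh = np.abs(np.diff(depth, axis=1)) > smoothness  # horizontal jumps
    dv = np.abs(np.diff(depth, axis=0)) > smoothness  # vertical jumps
    dci[:, 1:] |= dh
    dci[:, :-1] |= dh
    dci[1:, :] |= dv
    dci[:-1, :] |= dv
    return dci

# A depth step between columns 1 and 2 flags both sides of the edge.
depth = np.array([[1.0, 1.0, 5.0],
                  [1.0, 1.0, 5.0]])
dci = depth_change_indication(depth, smoothness=0.5)
```

Regions where the DCI is all false are smooth candidates for large planar patches, which is why the patented method can grow region size dynamically according to it.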
  • Publication number: 20210082096
    Abstract: A method of processing light field images for separating a transmitted layer from a reflection layer is provided. The method comprises capturing a plurality of views at a plurality of viewpoints with different polarization angles; obtaining an initial disparity estimation for a first view using SIFT-flow, and warping the first view to a reference view; optimizing an objective function comprising a transmitted layer and a secondary layer using an Augmented Lagrange Multiplier (ALM) with an Alternating Direction Minimizing (ADM) strategy; updating the disparity estimation for the first view; repeating the steps of optimizing the objective function and updating the disparity estimation until the change in the objective function between two consecutive iterations is below a threshold; and separating the transmitted layer and the secondary layer using the disparity estimation for the first view.
    Type: Application
    Filed: October 19, 2020
    Publication date: March 18, 2021
    Inventors: Minye WU, Zhiru SHI, Jingyi YU
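The outer loop of this abstract, alternating between optimizing the layer objective and updating disparity until the objective's change between consecutive iterations falls below a threshold, has a simple skeleton. The inner ALM/ADM solver is stubbed out below with a hypothetical step function; only the convergence structure is illustrated:

```python
def separate_layers(objective_step, max_iters=100, tol=1e-6):
    """Outer loop: repeatedly run one optimize-and-update step and
    stop when the objective change between two consecutive
    iterations drops below `tol`. The real inner solver (ALM with
    ADM) is abstracted into `objective_step`."""
    prev = objective_step()
    for it in range(1, max_iters):
        cur = objective_step()
        if abs(prev - cur) < tol:
            return cur, it
        prev = cur
    return prev, max_iters

# Hypothetical inner solver whose objective shrinks geometrically.
state = {"f": 1.0}
def step():
    state["f"] *= 0.5
    return state["f"]

value, iters = separate_layers(step)
```

Stopping on the objective's *change* rather than its value is what the abstract specifies, and it is robust when the minimum of the objective is unknown in advance.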
  • Patent number: 10909752
    Abstract: The present invention relates to an all-around spherical light field rendering method, comprising: a preparation step, i.e., preparing to input and load related files; pre-computing a latticed depth map for the positions of reference cameras densely covering a sphere; moving a rendering camera, whose range of movement is the surface of the sphere, and identifying the reference cameras surrounding the rendering camera; back-projecting the pixels of the rendering camera and performing a depth test with the four surrounding reference cameras; and interpolating among the reference cameras that pass the depth test to obtain the final rendered pixel value. With the present invention, rendering results can be viewed rapidly in real time; an object can be observed from any angle on the spherical surface with a realistic sense of immersion.
    Type: Grant
    Filed: July 16, 2018
    Date of Patent: February 2, 2021
    Assignee: PLEX-VR DIGITAL TECHNOLOGY (SHANGHAI) CO., LTD.
    Inventors: Jingyi Yu, Huangjie Yu
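The neighbour-identification step in this abstract, finding the reference cameras on the sphere that surround the rendering camera, can be approximated by ranking reference viewing directions by angular distance. The depth test and interpolation weights of the patent are omitted; this is only the selection step:

```python
import math

def nearest_refs(render_dir, ref_dirs, k=4):
    """Return indices of the k reference cameras whose (unit) viewing
    directions are closest in angle to the rendering camera's.
    A simplified stand-in for the surrounding-camera search."""
    def angle(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return math.acos(max(-1.0, min(1.0, dot)))  # clamp for safety
    ranked = sorted(range(len(ref_dirs)),
                    key=lambda i: angle(render_dir, ref_dirs[i]))
    return ranked[:k]

# Six axis-aligned reference cameras; the +x camera is nearest to +x.
refs = [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, 0, 1), (0, -1, 0), (0, 0, -1)]
idx = nearest_refs((1, 0, 0), refs, k=1)
```

In the patented pipeline the four selected cameras are then depth-tested against the pre-computed latticed depth map before their contributions are interpolated.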
  • Publication number: 20200380770
    Abstract: The present invention relates to an all-around spherical light field rendering method, comprising: a preparation step, i.e., preparing to input and load related files; pre-computing a latticed depth map for the positions of reference cameras densely covering a sphere; moving a rendering camera, whose range of movement is the surface of the sphere, and identifying the reference cameras surrounding the rendering camera; back-projecting the pixels of the rendering camera and performing a depth test with the four surrounding reference cameras; and interpolating among the reference cameras that pass the depth test to obtain the final rendered pixel value. With the present invention, rendering results can be viewed rapidly in real time; an object can be observed from any angle on the spherical surface with a realistic sense of immersion.
    Type: Application
    Filed: July 16, 2018
    Publication date: December 3, 2020
    Inventors: Jingyi Yu, Huangjie Yu
  • Patent number: 10789752
    Abstract: A method and system of using multiple image cameras or multiple image and depth cameras to capture a target object. Geometry and texture are reconstructed using captured images and depth images. New images are rendered using geometry-based rendering methods or image-based rendering methods.
    Type: Grant
    Filed: June 11, 2018
    Date of Patent: September 29, 2020
    Inventor: Jingyi Yu
  • Publication number: 20200302155
    Abstract: A method of detecting and recognizing faces using a light field camera array is provided. The method includes capturing multi-view color images using the light field camera array; obtaining a depth map; conducting light field rendering using a weight function comprising a depth component and a semantic component, where the weight function assigns a weight to each ray in the light field; and detecting and recognizing a face.
    Type: Application
    Filed: June 5, 2020
    Publication date: September 24, 2020
    Inventors: Zhiru SHI, Minye WU, Wenguang MA, Jingyi YU
  • Patent number: 10762654
    Abstract: A method of generating a three-dimensional model of an object is disclosed. The method may use a light field camera to capture a plurality of light field images at a plurality of viewpoints. The method may include capturing a first light field image at a first viewpoint; capturing a second light field image at the second viewpoint; estimating a rotation and a translation of a light field from the first viewpoint to the second viewpoint; obtaining a disparity map from each of the plurality of light field images; and computing a three-dimensional point cloud by optimizing the rotation and translation of the light field and the disparity map. The first light field image may include a first plurality of subaperture images and the second light field image may include a second plurality of subaperture images.
    Type: Grant
    Filed: November 6, 2019
    Date of Patent: September 1, 2020
    Assignee: SHANGHAITECH UNIVERSITY
    Inventor: Jingyi Yu
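Lifting a disparity map into a 3D point cloud, as this abstract describes, rests on the classical stereo relation Z = f·B/d between disparity d, focal length f, and baseline B. The values below are arbitrary, and the patent's joint optimization of pose and disparity is not reproduced:

```python
def disparity_to_depth(disparity, focal, baseline):
    """Convert a disparity map to depth via Z = focal * baseline / d.
    Zero or negative disparities map to 0.0 (invalid). Illustrative
    of the lifting step only; no pose optimization here."""
    return [[focal * baseline / d if d > 0 else 0.0 for d in row]
            for row in disparity]

# focal = 100 px, baseline = 0.1 units: disparity 2 px -> depth 5.
depth = disparity_to_depth([[2.0, 4.0]], focal=100.0, baseline=0.1)
```

Larger disparity means a nearer point, which the inverse relation makes explicit; in the patented method the subaperture images of the light field supply the disparities.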
  • Publication number: 20200219301
    Abstract: A method and system of using multiple image cameras or multiple image and depth cameras to capture a target object. Geometry and texture are reconstructed using captured images and depth images. New images are rendered using geometry-based rendering methods or image-based rendering methods.
    Type: Application
    Filed: March 19, 2020
    Publication date: July 9, 2020
    Inventor: Jingyi YU
  • Patent number: 10657664
    Abstract: A method and system for capturing images and depth for image-based rendering. Capture is made by a multi-camera configuration using a combination of image and depth cameras. Rendering utilizes scene geometry derived from image and depth data.
    Type: Grant
    Filed: June 11, 2018
    Date of Patent: May 19, 2020
    Inventor: Jingyi Yu
  • Publication number: 20200141804
    Abstract: A method for generating hyperspectral data-cubes based on a plurality of hyperspectral light field (H-LF) images is disclosed. Each H-LF image may have a different view and a different spectral band. The method may include calculating a magnitude histogram, a direction histogram, and an overlapping histogram of oriented gradient for a plurality of pixels; developing a spectral-invariant feature descriptor by combining the magnitude histogram, the direction histogram, and the overlapping histogram of oriented gradient; obtaining a correspondence cost of the H-LF images based on the spectral-invariant feature descriptor; performing H-LF stereo matching on the H-LF images to obtain a disparity map of a reference view; and generating hyperspectral data-cubes by using the disparity map of the reference view. A bin in the overlapping histogram of oriented gradient may comprise overlapping ranges of directions.
    Type: Application
    Filed: November 8, 2019
    Publication date: May 7, 2020
    Inventor: Jingyi YU
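The distinctive element of this abstract is the overlapping histogram of oriented gradient, whose bins cover overlapping ranges of directions so a single gradient can vote in more than one bin. A simplified sketch (bin count and overlap factor are hypothetical) makes the overlap concrete:

```python
def overlapping_direction_histogram(angles_deg, n_bins=4, overlap=0.5):
    """Histogram of gradient directions where each bin is widened by
    `overlap` times the base bin width, so consecutive bins overlap
    and one direction can vote in two bins. Illustrative reading of
    the abstract's overlapping histogram, not the patented formula."""
    width = 360.0 / n_bins
    span = width * (1.0 + overlap)     # widened, wrap-around bin
    hist = [0] * n_bins
    for a in angles_deg:
        a = a % 360.0
        for b in range(n_bins):
            offset = (a - b * width) % 360.0
            if offset < span:
                hist[b] += 1
    return hist

# 10 degrees falls in bin 0 [0, 135) and in the wrapped bin 3 [270, 45).
hist = overlapping_direction_histogram([10.0], n_bins=4)
```

Overlap smooths the descriptor's response to small direction changes across spectral bands, which supports the spectral-invariance goal stated in the abstract.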
  • Publication number: 20200145642
    Abstract: A method and apparatus for extracting depth information from a focal stack is disclosed. The method may include processing the focal stack through a focus convolutional neural network (Focus-Net) to generate a plurality of feature maps, stacking the plurality of feature maps together, and fusing the plurality of feature maps by a plurality of first convolutional layers to obtain a depth image. The Focus-Net includes a plurality of branches, and each branch includes a downsampling convolutional layer having a different stride for downsampling the focal stack and a deconvolutional layer for upsampling the focal stack.
    Type: Application
    Filed: November 8, 2019
    Publication date: May 7, 2020
    Inventor: Jingyi Yu
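The patented Focus-Net itself is a learned multi-branch network and is not reproduced here; as a classical baseline for the same task, depth from a focal stack can be read off as the per-pixel argmax of a focus measure over the slices. The Laplacian-based measure below is a standard choice, shown only to make the problem concrete:

```python
import numpy as np

def depth_from_focus(stack):
    """Per-pixel argmax of a local-contrast focus measure over the
    focal stack slices: a classical depth-from-focus baseline, not
    the patented Focus-Net architecture."""
    measures = []
    for sl in stack:
        # squared response of a discrete 4-neighbour Laplacian
        lap = (np.roll(sl, 1, 0) + np.roll(sl, -1, 0)
               + np.roll(sl, 1, 1) + np.roll(sl, -1, 1) - 4 * sl)
        measures.append(lap ** 2)
    return np.argmax(np.stack(measures), axis=0)

# Slice 1 holds a sharp dot; it wins the focus measure at that pixel.
flat = np.zeros((5, 5))
sharp = np.zeros((5, 5)); sharp[2, 2] = 1.0
depth_index = depth_from_focus([flat, sharp])
```

The index of the winning slice is a proxy for depth, since each slice corresponds to one focus distance; the network in the abstract replaces this hand-crafted measure with learned multi-scale features.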
  • Patent number: 10645281
    Abstract: A method for generating high resolution multi-spectral light fields is disclosed. The method may include capturing a multi-perspective spectral image which includes a plurality of sub-view images; aligning and warping the sub-view images to obtain low resolution multi-spectral light fields; obtaining a high resolution dictionary and a low resolution dictionary; obtaining a sparse representation based on the low resolution multi-spectral light fields and the low resolution dictionary; and generating high resolution multi-spectral light fields with the sparse representation and the high resolution dictionary. Each sub-view image is captured with a different perspective and a different spectral range. The multi-perspective spectral image is obtained with one exposure.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: May 5, 2020
    Assignee: ShanghaiTech University
    Inventor: Jingyi Yu
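The core identity behind this abstract's paired-dictionary scheme is that the sparse code fitted against the low resolution dictionary is reused with the high resolution dictionary to synthesize the high resolution signal. The toy dictionaries below are hypothetical, and plain least squares stands in for a proper sparse coder:

```python
import numpy as np

# Paired dictionaries: the same code reconstructs a patch at both
# resolutions. Atoms and signals here are toy stand-ins.
D_lr = np.array([[1.0, 0.0],
                 [0.0, 1.0]])          # low resolution dictionary (2 atoms)
D_hr = np.array([[2.0, 0.0],
                 [2.0, 0.0],
                 [0.0, 3.0],
                 [0.0, 3.0]])          # matching high resolution atoms

y_lr = np.array([0.5, 0.0])            # observed low resolution patch
code, *_ = np.linalg.lstsq(D_lr, y_lr, rcond=None)  # fit code against D_lr
y_hr = D_hr @ code                     # synthesize with D_hr
```

A real implementation would enforce sparsity on `code` (e.g., with an L1 penalty or matching pursuit) and learn the two dictionaries jointly so that codes transfer between resolutions; the sketch only shows the transfer step.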