Patents by Inventor Jingyi Yu

Jingyi Yu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10909752
    Abstract: The present invention relates to an all-around spherical light field rendering method, comprising: a preparation step of inputting and loading the related files; pre-computing a latticed depth map for the positions of reference cameras densely covering a sphere; moving a rendering camera, whose range of motion is the surface of the sphere, and identifying the reference cameras surrounding it; back-projecting the pixels of the rendering camera and performing a depth test against the four surrounding reference cameras; and interpolating among the reference cameras that pass the depth test to obtain the final rendered pixel value. With the present invention, rendering results can be viewed rapidly in real time, an object can be observed from any angle on the spherical surface, and a genuine sense of immersion is achieved.
    Type: Grant
    Filed: July 16, 2018
    Date of Patent: February 2, 2021
    Assignee: PLEX-VR DIGITAL TECHNOLOGY (SHANGHAI) CO., LTD.
    Inventors: Jingyi Yu, Huangjie Yu
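The rendering loop the abstract describes (back-project a pixel, depth-test it against the surrounding reference cameras, interpolate the survivors) can be sketched as a toy, assuming a hypothetical per-camera record with a position, a precomputed depth, and a sampled color; this is an illustrative simplification, not the patented implementation:

```python
import math

def render_pixel(point, ref_cams, eps=1e-3):
    """Interpolate a pixel value from the reference cameras that pass the
    depth test. Each camera is a dict with 'pos' (position on the sphere),
    'depth' (precomputed depth of the back-projected point in that view),
    and 'value' (color sampled at the back-projected pixel) -- a
    hypothetical structure used only for this sketch."""
    num, den = 0.0, 0.0
    for cam in ref_cams:
        # Depth test: the camera-to-point distance must agree with the
        # camera's precomputed depth map, otherwise the point is occluded
        # in that view and the camera is skipped.
        dist = math.dist(point, cam['pos'])
        if abs(dist - cam['depth']) > eps:
            continue
        w = 1.0 / (dist + eps)          # nearer reference views weigh more
        num += w * cam['value']
        den += w
    return num / den if den else None   # None: every view was occluded
```

Excluding occluded views from the interpolation is what keeps ghost colors from leaking into the rendered pixel as the camera moves over the sphere.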
  • Publication number: 20200380770
    Abstract: The present invention relates to an all-around spherical light field rendering method, comprising: a preparation step of inputting and loading the related files; pre-computing a latticed depth map for the positions of reference cameras densely covering a sphere; moving a rendering camera, whose range of motion is the surface of the sphere, and identifying the reference cameras surrounding it; back-projecting the pixels of the rendering camera and performing a depth test against the four surrounding reference cameras; and interpolating among the reference cameras that pass the depth test to obtain the final rendered pixel value. With the present invention, rendering results can be viewed rapidly in real time, an object can be observed from any angle on the spherical surface, and a genuine sense of immersion is achieved.
    Type: Application
    Filed: July 16, 2018
    Publication date: December 3, 2020
    Inventors: Jingyi Yu, Huangjie Yu
  • Patent number: 10789752
    Abstract: A method and system of using multiple image cameras or multiple image and depth cameras to capture a target object. Geometry and texture are reconstructed using captured images and depth images. New images are rendered using geometry based rendering methods or image based rendering methods.
    Type: Grant
    Filed: June 11, 2018
    Date of Patent: September 29, 2020
    Inventor: Jingyi Yu
  • Publication number: 20200302155
    Abstract: A method of detecting and recognizing faces using a light field camera array is provided. The method includes capturing multi-view color images using the light field camera array; obtaining a depth map; conducting light field rendering using a weight function comprising a depth component and a semantic component, where the weight function assigns a weight to each ray in the light field; and detecting and recognizing a face.
    Type: Application
    Filed: June 5, 2020
    Publication date: September 24, 2020
    Inventors: Zhiru SHI, Minye WU, Wengguang MA, Jingyi YU
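The per-ray weighting can be illustrated with a toy weight function. The Gaussian depth term, the linear blend of the two components, and the `sigma` and `alpha` parameters are all assumptions made for illustration, not the patent's actual formulas:

```python
import math

def ray_weight(ray_depth, face_depth, face_prob, sigma=0.05, alpha=0.5):
    """Weight for one light-field ray, combining:
      - a depth component favoring rays consistent with the face's depth
        (Gaussian falloff, an illustrative choice), and
      - a semantic component favoring rays that land on pixels a face
        segmentation map classifies as face (face_prob in [0, 1])."""
    depth_term = math.exp(-((ray_depth - face_depth) ** 2) / (2 * sigma ** 2))
    semantic_term = face_prob
    return alpha * depth_term + (1 - alpha) * semantic_term
```

Rays whose depth disagrees with the face's depth, or that fall on background pixels, contribute less to the rendered view used for detection and recognition.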
  • Patent number: 10762654
    Abstract: A method of generating a three-dimensional model of an object is disclosed. The method may use a light field camera to capture a plurality of light field images at a plurality of viewpoints. The method may include capturing a first light field image at a first viewpoint; capturing a second light field image at a second viewpoint; estimating a rotation and a translation of a light field from the first viewpoint to the second viewpoint; obtaining a disparity map from each of the plurality of light field images; and computing a three-dimensional point cloud by optimizing the rotation and translation of the light field and the disparity map. The first light field image may include a first plurality of subaperture images and the second light field image may include a second plurality of subaperture images.
    Type: Grant
    Filed: November 6, 2019
    Date of Patent: September 1, 2020
    Assignee: SHANGHAITECH UNIVERSITY
    Inventor: Jingyi Yu
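The final step, turning a disparity map into a point cloud, follows the standard stereo relation Z = f·b/d. The sketch below shows only that back-projection and omits the patent's joint optimization of the light-field rotation and translation:

```python
def disparity_to_points(disparity, focal, baseline):
    """Back-project a disparity map (a row-major 2-D list) into a 3-D
    point cloud via the standard relation Z = focal * baseline / d,
    with X and Y recovered by the pinhole model."""
    points = []
    for v, row in enumerate(disparity):
        for u, d in enumerate(row):
            if d <= 0:
                continue            # no valid correspondence at this pixel
            z = focal * baseline / d
            points.append((u * z / focal, v * z / focal, z))
    return points
```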
  • Publication number: 20200219301
    Abstract: A method and system of using multiple image cameras or multiple image and depth cameras to capture a target object. Geometry and texture are reconstructed using captured images and depth images. New images are rendered using geometry based rendering methods or image based rendering methods.
    Type: Application
    Filed: March 19, 2020
    Publication date: July 9, 2020
    Inventor: Jingyi YU
  • Patent number: 10657664
    Abstract: A method and system for capturing images and depth for image-based rendering. Capture is made by a multi-camera configuration using a combination of image and depth cameras. Rendering utilizes scene geometry derived from image and depth data.
    Type: Grant
    Filed: June 11, 2018
    Date of Patent: May 19, 2020
    Inventor: Jingyi Yu
  • Publication number: 20200145642
    Abstract: A method and apparatus for extracting depth information from a focal stack is disclosed. The method may include processing the focal stack through a focus convolutional neural network (Focus-Net) to generate a plurality of feature maps, stacking the plurality of feature maps together, and fusing the plurality of feature maps by a plurality of first convolutional layers to obtain a depth image. The Focus-Net includes a plurality of branches, and each branch includes a downsampling convolutional layer having a different stride for downsampling the focal stack and a deconvolutional layer for upsampling the focal stack.
    Type: Application
    Filed: November 8, 2019
    Publication date: May 7, 2020
    Inventor: Jingyi Yu
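The multi-branch, multi-stride structure can be caricatured in one dimension: each branch downsamples at its own stride and upsamples back to the input length, and a fusion step merges the stacked outputs. Repetition-based resampling and a per-position average stand in for the strided convolution, deconvolution, and fusion layers of the actual network:

```python
def branch(signal, stride):
    """One Focus-Net-style branch (1-D toy): downsample with the branch's
    stride, then upsample back to the input length by repetition."""
    down = signal[::stride]
    return [down[min(i // stride, len(down) - 1)] for i in range(len(signal))]

def fuse(branches):
    """Stack the branch outputs and fuse them per position (an average
    here, standing in for the fusion convolutions)."""
    return [sum(vals) / len(vals) for vals in zip(*branches)]
```

Branches with larger strides see coarser, wider context; fusing them lets the depth estimate combine fine focus cues with scene-level ones.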
  • Publication number: 20200141804
    Abstract: A method for generating hyperspectral data-cubes based on a plurality of hyperspectral light field (H-LF) images is disclosed. Each H-LF image may have a different view and a different spectral band. The method may include calculating a magnitude histogram, a direction histogram, and an overlapping histogram of oriented gradient for a plurality of pixels; developing a spectral-invariant feature descriptor by combining the magnitude histogram, the direction histogram, and the overlapping histogram of oriented gradient; obtaining a correspondence cost of the H-LF images based on the spectral-invariant feature descriptor; performing H-LF stereo matching on the H-LF images to obtain a disparity map of a reference view; and generating hyperspectral data-cubes by using the disparity map of the reference view. A bin in the overlapping histogram of oriented gradient may comprise overlapping ranges of directions.
    Type: Application
    Filed: November 8, 2019
    Publication date: May 7, 2020
    Inventor: Jingyi YU
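The distinctive ingredient here is a histogram of oriented gradients whose bins cover overlapping ranges of direction, so a gradient near a bin boundary votes into both neighboring bins. A minimal sketch, with the bin count and the 50% widening chosen arbitrarily for illustration:

```python
def overlapping_hog(angles, magnitudes, n_bins=4, overlap=0.5):
    """Magnitude-weighted orientation histogram with overlapping bins:
    each bin's angular range is widened by `overlap`, so one gradient
    can vote into more than one bin."""
    width = 360.0 / n_bins
    span = width * (1 + overlap)          # widened bin range
    hist = [0.0] * n_bins
    for a, m in zip(angles, magnitudes):
        for b in range(n_bins):
            center = (b + 0.5) * width
            # circular distance from the gradient angle to the bin center
            d = abs((a - center + 180) % 360 - 180)
            if d <= span / 2:
                hist[b] += m
    return hist
```

The overlap makes the descriptor less sensitive to small direction shifts, which helps when matching views captured in different spectral bands.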
  • Patent number: 10643305
    Abstract: A method of compressing a stereoscopic video including a left view frame and a right view frame is provided, the method including: determining a texture saliency value for a first block in the left view frame by intra prediction; determining a motion saliency value for the first block by motion estimation; determining a disparity saliency value between the first block and a corresponding second block in the right view frame; determining a quantization parameter based on the disparity saliency value, the texture saliency value, and the motion saliency value; and performing quantization of the first block in accordance with the quantization parameter.
    Type: Grant
    Filed: January 18, 2016
    Date of Patent: May 5, 2020
    Assignee: SHANGHAITECH UNIVERSITY
    Inventors: Jingyi Yu, Yi Ma
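The idea behind the quantization step is that perceptually salient blocks should be quantized more finely (lower QP) and unsalient ones more coarsely. A toy mapping from the three saliency values to a quantization parameter; the plain average and the base/range constants are illustrative assumptions, not the patent's rule:

```python
def block_qp(texture_sal, motion_sal, disparity_sal,
             base_qp=30, qp_range=8):
    """Map a block's combined saliency (each component in [0, 1]) to a
    quantization parameter: high saliency -> lower QP (finer quantization),
    low saliency -> higher QP (coarser quantization)."""
    s = (texture_sal + motion_sal + disparity_sal) / 3.0
    return round(base_qp + qp_range * (0.5 - s))
```

This spends bits where texture, motion, or stereo disparity makes artifacts most visible, at the cost of static, flat, or low-disparity regions.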
  • Patent number: 10645281
    Abstract: A method for generating high resolution multi-spectral light fields is disclosed. The method may include capturing a multi-perspective spectral image which includes a plurality of sub-view images; aligning and warping the sub-view images to obtain low resolution multi-spectral light fields; obtaining a high resolution dictionary and a low resolution dictionary; obtaining a sparse representation based on the low resolution multi-spectral light fields and the low resolution dictionary; and generating high resolution multi-spectral light fields with the sparse representation and the high resolution dictionary. Each sub-view image is captured with a different perspective and a different spectral range. The multi-perspective spectral image is obtained with one exposure.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: May 5, 2020
    Assignee: ShanghaiTech University
    Inventor: Jingyi Yu
  • Patent number: 10641658
    Abstract: A method for generating hyperspectral data-cubes based on a plurality of hyperspectral light field (H-LF) images is disclosed. Each H-LF image may have a different view and a different spectral band. The method may include calculating a magnitude histogram, a direction histogram, and an overlapping histogram of oriented gradient for a plurality of pixels; developing a spectral-invariant feature descriptor by combining the magnitude histogram, the direction histogram, and the overlapping histogram of oriented gradient; obtaining a correspondence cost of the H-LF images based on the spectral-invariant feature descriptor; performing H-LF stereo matching on the H-LF images to obtain a disparity map of a reference view; and generating hyperspectral data-cubes by using the disparity map of the reference view. A bin in the overlapping histogram of oriented gradient may comprise overlapping ranges of directions.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: May 5, 2020
    Assignee: ShanghaiTech University
    Inventor: Jingyi Yu
  • Patent number: 10645368
    Abstract: A method and apparatus for extracting depth information from a focal stack is disclosed. The method may include processing the focal stack through a focus convolutional neural network (Focus-Net) to generate a plurality of feature maps, stacking the plurality of feature maps together, and fusing the plurality of feature maps by a plurality of first convolutional layers to obtain a depth image. The Focus-Net includes a plurality of branches, and each branch includes a downsampling convolutional layer having a different stride for downsampling the focal stack and a deconvolutional layer for upsampling the focal stack.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: May 5, 2020
    Assignee: SHANGHAITECH UNIVERSITY
    Inventor: Jingyi Yu
  • Patent number: 10636121
    Abstract: A method of calibrating a camera array comprising a plurality of cameras configured to capture a plurality of images to generate a panorama, wherein the relative positions among the plurality of cameras are constant, the method comprising: moving the camera array from a first position to a second position; measuring a homogeneous transformation matrix of a reference point on the camera array between the first position and the second position; capturing images at the first position and the second position by a first camera and a second camera on the camera array; and determining a homogeneous transformation matrix between the first camera and the second camera based on the images captured by the first camera and the second camera at the first position and the second position.
    Type: Grant
    Filed: January 12, 2016
    Date of Patent: April 28, 2020
    Assignee: SHANGHAITECH UNIVERSITY
    Inventors: Jingyi Yu, Yi Ma
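The calibration works in the space of homogeneous transformation matrices. A 2-D toy of the basic operations it relies on (matrix composition, inverting a rigid transform, and forming the relative transform between two rigidly linked camera poses) might look like this; it shows the algebra, not the patent's estimation procedure:

```python
def mat_mul(A, B):
    """Compose two 3x3 homogeneous transforms (2-D case)."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inverse_rigid(T):
    """Invert a 2-D homogeneous rigid transform [[R, t], [0, 1]]:
    the inverse is [[R^T, -R^T t], [0, 1]]."""
    (r00, r01, tx), (r10, r11, ty), _ = T
    return [[r00, r10, -(r00 * tx + r10 * ty)],
            [r01, r11, -(r01 * tx + r11 * ty)],
            [0.0, 0.0, 1.0]]

def relative_transform(T_a, T_b):
    """Relative transform between two camera poses: T_ab = T_a^{-1} T_b.
    Since the cameras are rigidly mounted, this stays constant as the
    array moves, which is what the calibration exploits."""
    return mat_mul(inverse_rigid(T_a), T_b)
```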
  • Publication number: 20200120270
    Abstract: A method for generating high resolution multi-spectral light fields is disclosed. The method may include capturing a multi-perspective spectral image which includes a plurality of sub-view images; aligning and warping the sub-view images to obtain low resolution multi-spectral light fields; obtaining a high resolution dictionary and a low resolution dictionary; obtaining a sparse representation based on the low resolution multi-spectral light fields and the low resolution dictionary; and generating high resolution multi-spectral light fields with the sparse representation and the high resolution dictionary. Each sub-view image is captured with a different perspective and a different spectral range. The multi-perspective spectral image is obtained with one exposure.
    Type: Application
    Filed: November 8, 2019
    Publication date: April 16, 2020
    Inventor: Jingyi YU
  • Publication number: 20200090354
    Abstract: A method and system for capturing images and depth for image-based rendering. Capture is made by a multi-camera configuration using a combination of image and depth cameras. Rendering utilizes scene geometry derived from image and depth data.
    Type: Application
    Filed: June 11, 2018
    Publication date: March 19, 2020
    Inventor: Jingyi YU
  • Publication number: 20200074658
    Abstract: A method of generating a three-dimensional model of an object is disclosed. The method may use a light field camera to capture a plurality of light field images at a plurality of viewpoints. The method may include capturing a first light field image at a first viewpoint; capturing a second light field image at a second viewpoint; estimating a rotation and a translation of a light field from the first viewpoint to the second viewpoint; obtaining a disparity map from each of the plurality of light field images; and computing a three-dimensional point cloud by optimizing the rotation and translation of the light field and the disparity map. The first light field image may include a first plurality of subaperture images and the second light field image may include a second plurality of subaperture images.
    Type: Application
    Filed: November 6, 2019
    Publication date: March 5, 2020
    Inventor: Jingyi YU
  • Patent number: 10546395
    Abstract: Light representing a scene is directed through a lens module coupled to an imaging sensor. The lens module includes: first and second cylindrical lenses positioned along an optical axis of the imaging sensor, and first and second slit-shaped apertures disposed on the respective first and second cylindrical lenses. A cylindrical axis of the second cylindrical lens is arranged at an angle away from parallel with respect to a cylindrical axis of the first cylindrical lens. The light directed through the lens module is captured by the imaging sensor to form at least one multi-perspective image. The at least one multi-perspective image is processed to determine a reconstruction characteristic of the scene.
    Type: Grant
    Filed: October 3, 2014
    Date of Patent: January 28, 2020
    Assignee: University of Delaware
    Inventors: Jingyi Yu, Jinwei Ye, Yu Ji
  • Patent number: 10489886
    Abstract: A method of generating a stereoscopic panorama is provided, the method comprising: processing a first right image, a second right image, a first left image, and a second left image to derive a right homography between the first right image and the second right image and a left homography between the first left image and the second left image; stitching the first right image with the second right image to generate a right panorama, and the first left image with the second left image to generate a left panorama, wherein the right homography is consistent with the left homography; and generating a stereo panorama using the right panorama and the left panorama.
    Type: Grant
    Filed: January 13, 2016
    Date of Patent: November 26, 2019
    Assignee: ShanghaiTech University
    Inventors: Jingyi Yu, Yi Ma
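The consistency constraint between the left and right homographies exists to keep corresponding points on the same panorama row, i.e. to avoid vertical disparity, which breaks stereo fusion. A sketch of applying a 3×3 homography and measuring that drift; this is an illustrative check, not the patent's solver:

```python
def apply_homography(H, pt):
    """Apply a 3x3 homography to a 2-D point, with homogeneous division."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def vertical_disparity(H_left, H_right, pt):
    """Vertical drift a corresponding point pair would acquire if the left
    and right panoramas were stitched with inconsistent homographies;
    consistent stitching keeps this near zero."""
    _, yl = apply_homography(H_left, pt)
    _, yr = apply_homography(H_right, pt)
    return yl - yr
```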
  • Patent number: 10397545
    Abstract: Methods and systems for generating three-dimensional (3D) images, 3D light field (LF) cameras and 3D photographs are provided. Light representing a scene is directed through a lens module coupled to an imaging sensor. The lens module includes: a surface having a slit-shaped aperture and a cylindrical lens array positioned along an optical axis of the imaging sensor. A longitudinal direction of the slit-shaped aperture is arranged orthogonal to a cylindrical axis of the cylindrical lens array. The light directed through the lens module is captured by the imaging sensor to form a 3D LF image. A 3D photograph includes a 3D LF printed image of the scene and a cylindrical lens array disposed on the printed image, such that the combination of 3D LF printed image and the cylindrical lens array forms a 3D stereoscopic image.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: August 27, 2019
    Assignee: University of Delaware
    Inventors: Jingyi Yu, Xinqing Guo, Zhan Yu