Patents by Inventor Jingyi Yu

Jingyi Yu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190035055
    Abstract: A method of generating a stereoscopic panorama is provided, the method comprising: processing a first right image, a second right image, a first left image, and a second left image to derive a right homography between the first right image and the second right image and a left homography between the first left image and the second left image; stitching the first right image with the second right image to generate a right panorama, and the first left image with the second left image to generate a left panorama, wherein the right homography is consistent with the left homography; and generating a stereo panorama using the right panorama and the left panorama.
    Type: Application
    Filed: January 13, 2016
    Publication date: January 31, 2019
    Inventors: Jingyi YU, Yi MA
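The homography estimation described in the abstract above reduces, at stitching time, to warping pixels of one image into the frame of the other. As a minimal illustration (standard projective geometry, not the patented method itself), applying a 3×3 homography to a 2-D point looks like:

```python
def apply_homography(H, point):
    """Map a 2-D point through a 3x3 homography given as nested lists."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    if abs(w) < 1e-12:
        raise ValueError("point maps to infinity under this homography")
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return (u, v)
```

During stitching every pixel of the second image is warped this way into the first image's frame; enforcing that the right and left homographies agree keeps vertical disparity out of the resulting stereo pair.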
  • Publication number: 20190028707
    Abstract: A method of compressing a stereoscopic video including a left view frame and a right view frame is provided, the method including: determining a texture saliency value for a first block in the left view frame by intra prediction (1101); determining a motion saliency value for the first block by motion estimation (1102); determining a disparity saliency value between the first block and a corresponding second block in the right view frame (1103); determining a quantization parameter based on the disparity saliency value, the texture saliency value, and the motion saliency value (1104); and performing quantization of the first block in accordance with the quantization parameter (1105).
    Type: Application
    Filed: January 18, 2016
    Publication date: January 24, 2019
    Inventors: Jingyi YU, Yi MA
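The abstract leaves the mapping from the three saliency values to a quantization parameter unspecified. A plausible toy scheme — with made-up weights and H.264-style QP bounds, purely for illustration — drives the QP down (finer quantization) as combined saliency rises:

```python
def quantization_parameter(texture_s, motion_s, disparity_s,
                           base_qp=32, min_qp=10, max_qp=51,
                           weights=(0.4, 0.3, 0.3)):
    """Combine per-block saliency values (each in [0, 1]) into a QP.

    Salient blocks (high texture, motion, or disparity saliency) receive a
    lower QP so they are quantized more finely; the weights are assumptions.
    """
    s = (weights[0] * texture_s
         + weights[1] * motion_s
         + weights[2] * disparity_s)
    qp = round(base_qp - s * (base_qp - min_qp))
    return max(min_qp, min(max_qp, qp))
```

The block's coefficients would then be quantized with a step size derived from this QP, exactly as in a conventional encoder.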
  • Publication number: 20190028693
    Abstract: A method of calibrating a camera array comprising a plurality of cameras configured to capture a plurality of images to generate a panorama, wherein the relative positions among the plurality of cameras are constant, the method comprising: moving the camera array from a first position to a second position; measuring a homogeneous transformation matrix of a reference point on the camera array between the first position and the second position; capturing images at the first position and the second position by a first camera and a second camera on the camera array; and determining a homogeneous transformation matrix between the first camera and the second camera based on the images captured by the first camera and the second camera at the first position and the second position.
    Type: Application
    Filed: January 12, 2016
    Publication date: January 24, 2019
    Inventors: Jingyi YU, Yi MA
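Measuring the rig's own motion and each camera's observed motion, then solving for the fixed inter-camera transform, matches the classical hand-eye relation AX = XB (an interpretation of the abstract, not necessarily the patented solver). The constraint can be checked on synthetic rigid transforms:

```python
import math

def matmul4(A, B):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def inv_rigid(T):
    """Invert a rigid homogeneous transform [R t; 0 1] as [R^T -R^T t; 0 1]."""
    Rt = [[T[j][i] for j in range(3)] for i in range(3)]
    t = [-sum(Rt[i][j] * T[j][3] for j in range(3)) for i in range(3)]
    out = [[Rt[i][0], Rt[i][1], Rt[i][2], t[i]] for i in range(3)]
    out.append([0.0, 0.0, 0.0, 1.0])
    return out

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotz(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
```

If C1 and C2 are the fixed offsets of two cameras on the rig and D is the rig's motion, then each camera's observed motion is A_i = inv(C_i)·D·C_i, and the unknown X = inv(C1)·C2 satisfies A1·X = X·A2 — the equation the calibration solves.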
  • Publication number: 20180293744
    Abstract: A method and system for capturing images and depth for image-based rendering. Capture is performed by a multi-camera configuration combining image and depth cameras. Rendering uses scene geometry derived from the image and depth data.
    Type: Application
    Filed: June 11, 2018
    Publication date: October 11, 2018
    Inventor: Jingyi YU
  • Publication number: 20180293774
    Abstract: A method and system of using multiple image cameras or multiple image and depth cameras to capture a target object. Geometry and texture are reconstructed using captured images and depth images. New images are rendered using geometry based rendering methods or image based rendering methods.
    Type: Application
    Filed: June 11, 2018
    Publication date: October 11, 2018
    Inventor: Jingyi YU
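Rendering from captured depth, as in the two entries above, typically starts by lifting pixels into 3-D with a pinhole camera model. A generic sketch of that step (standard computer-vision math, not the specific patented pipeline):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift pixel (u, v) with metric depth into a camera-space 3-D point,
    using focal lengths (fx, fy) and principal point (cx, cy)."""
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return (X, Y, depth)

def project(X, Y, Z, fx, fy, cx, cy):
    """Inverse operation: project a camera-space point back to a pixel."""
    return (fx * X / Z + cx, fy * Y / Z + cy)
```

New views are then synthesized by transforming the lifted points into a virtual camera's frame and projecting them again, blending texture from the nearest source images.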
  • Publication number: 20160330432
    Abstract: Methods and systems for generating three-dimensional (3D) images, 3D light field (LF) cameras and 3D photographs are provided. Light representing a scene is directed through a lens module coupled to an imaging sensor. The lens module includes: a surface having a slit-shaped aperture and a cylindrical lens array positioned along an optical axis of the imaging sensor. A longitudinal direction of the slit-shaped aperture is arranged orthogonal to a cylindrical axis of the cylindrical lens array. The light directed through the lens module is captured by the imaging sensor to form a 3D LF image. A 3D photograph includes a 3D LF printed image of the scene and a cylindrical lens array disposed on the printed image, such that the combination of 3D LF printed image and the cylindrical lens array forms a 3D stereoscopic image.
    Type: Application
    Filed: December 23, 2014
    Publication date: November 10, 2016
    Applicant: University of Delaware
    Inventors: Jingyi Yu, Xinqing Guo, Zhan Yu
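The 3D photograph described above works because multiple views are column-interleaved under the cylindrical lenslets, so each eye sees a different subset of columns. A toy sketch of the interleaving (the layout details here are an assumption, not taken from the patent):

```python
def interleave_views(views):
    """Column-interleave N equally sized grayscale view images (nested
    row-major lists) for display under a cylindrical lens array: lenslet k
    covers output columns k*N .. k*N+N-1, one column per view."""
    n = len(views)
    h, w = len(views[0]), len(views[0][0])
    out = [[0] * (w * n) for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for v in range(n):
                out[y][x * n + v] = views[v][y][x]
    return out
```

In practice the per-lenslet view order may need to be reversed to match the optics, and the interleaved image must be printed at a pitch matching the lens array.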
  • Publication number: 20160253824
    Abstract: Multi-perspective cameras, systems and methods for reconstructing a scene are provided. Light representing a scene is directed through a lens module coupled to an imaging sensor. The lens module includes: first and second cylindrical lenses positioned along an optical axis of the imaging sensor, and first and second slit-shaped apertures disposed on the respective first and second cylindrical lenses. A cylindrical axis of the second cylindrical lens is arranged at an angle away from parallel with respect to a cylindrical axis of the first cylindrical lens. The light directed through the lens module is captured by the imaging sensor to form at least one multi-perspective image. The at least one multi-perspective image is processed to determine a reconstruction characteristic of the scene.
    Type: Application
    Filed: October 3, 2014
    Publication date: September 1, 2016
    Applicant: University of Delaware
    Inventors: Jingyi YU, Jinwei YE, Yu JI
  • Publication number: 20150039383
    Abstract: A computer-implemented method for determining a desired product for performing a workflow is provided. The method includes receiving a selection of a desired workflow from a user. The desired workflow includes a set of steps including at least one step. The method includes receiving a selection of a step from the set of steps from the user, then displaying a plurality of product categories associated with the selected step. The method further includes receiving a selection of a product category of the plurality of product categories from the user, then displaying a plurality of products associated with the selected product category, and a plurality of associated metrics associated with each of the products of the plurality of products. At least one of the associated metrics is determined from input from a plurality of users.
    Type: Application
    Filed: January 25, 2013
    Publication date: February 5, 2015
    Applicant: LIFE TECHNOLOGIES CORPORATION
    Inventors: Darren Thierry, Jingyi Yu, Karen Spinks
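The selection flow above — step, then category, then products annotated with crowd-derived metrics — can be modeled with a nested mapping. The catalog contents and the mean-rating metric below are purely illustrative:

```python
# Hypothetical toy catalog: workflow step -> product category -> product -> user ratings.
catalog = {
    "amplification": {
        "polymerases": {
            "Poly-A": [4, 5, 3],  # example ratings submitted by several users
            "Poly-B": [5, 5],
        },
    },
}

def categories_for(step):
    """Categories to display once the user selects a workflow step."""
    return sorted(catalog[step])

def products_with_metrics(step, category):
    """Each product in the chosen category with its crowd-derived metric
    (here simply the mean user rating)."""
    items = catalog[step][category]
    return {name: sum(r) / len(r) for name, r in items.items()}
```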
  • Patent number: 7218792
    Abstract: A method generates a stylized image. First, a set of images is acquired of a scene. Each image is acquired under a different lighting condition. Silhouette edges are detected in the set of images. Texture regions are identified in the set of images according to the silhouette edges, and an output image is generated from a combination of the set of images wherein the silhouette edges and the texture regions are altered so as to enhance or de-emphasize selected details.
    Type: Grant
    Filed: March 19, 2003
    Date of Patent: May 15, 2007
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Ramesh Raskar, Jingyi Yu
  • Patent number: 7206449
    Abstract: A method detects silhouette edges in images. An ambient image is acquired of a scene with ambient light. A set of illuminated images is also acquired of the scene. Each illuminated image is acquired with a different light source illuminating the scene. The ambient image is combined with the set of illuminated images to detect cast shadows, and silhouette edge pixels are located from the cast shadows.
    Type: Grant
    Filed: March 19, 2003
    Date of Patent: April 17, 2007
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Ramesh Raskar, Jingyi Yu
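A common way to realize this (a sketch consistent with the abstract, using a made-up ratio threshold) is to flag pixels that the extra light source fails to brighten relative to the ambient image, then take shadow pixels bordering lit ones as silhouette-edge candidates:

```python
def shadow_mask(ambient, lit, thresh=1.2, eps=1e-6):
    """Flag pixels where the added light source barely brightens the scene:
    those pixels were occluded from the source, i.e. lie in cast shadow."""
    h, w = len(ambient), len(ambient[0])
    return [[(lit[y][x] + eps) / (ambient[y][x] + eps) < thresh
             for x in range(w)] for y in range(h)]

def silhouette_pixels(mask):
    """Shadow pixels that touch a lit 4-neighbour approximate the silhouette
    edge attached to the shadow-casting contour."""
    h, w = len(mask), len(mask[0])
    edges = set()
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not mask[ny][nx]:
                    edges.add((y, x))
    return edges
```

With several light sources at different positions, each illuminated image votes for shadow boundaries, and pixels flagged consistently across sources are kept as silhouette edges.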
  • Patent number: 7102638
    Abstract: A method generates an image with de-emphasized textures. Each pixel in the image is classified as either a silhouette edge pixel, a texture edge pixel, or a featureless pixel. A mask image M(x, y) is generated, wherein an intensity of a given pixel (x, y) in the mask image M(x, y) is zero if the pixel (x, y) is classified as the texture edge pixel, is d(x, y) if the pixel (x, y) is classified as the featureless pixel, and is one if the pixel (x, y) is classified as the silhouette edge pixel. An intensity gradient ∇I(x, y) is determined in the masked image, and the intensity gradients in the masked image are integrated according to G(x, y)=∇I(x, y)·M(x, y). Then, an output image I′ is generated by minimizing |∇I′−G|, and normalizing the intensities in the output image I′.
    Type: Grant
    Filed: March 19, 2003
    Date of Patent: September 5, 2006
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Ramesh Raskar, Jingyi Yu, Adrian Ilie
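The masked gradient-integration step has a simple closed form in one dimension, where cumulatively summing G exactly minimizes |∇I′ − G|. A 1-D analogue of the method (the 2-D case needs a Poisson solver; the anchor intensity is an assumption):

```python
def deemphasize_1d(I, M, anchor=None):
    """1-D analogue of masked gradient integration: zeroing mask entries at
    texture edges removes those intensity jumps from the reconstruction.

    I: list of intensities; M: mask over the len(I)-1 gradient positions.
    """
    if anchor is None:
        anchor = I[0]  # integration constant: pin the first sample
    g = [(I[i + 1] - I[i]) * M[i] for i in range(len(I) - 1)]
    out = [anchor]
    for gi in g:
        out.append(out[-1] + gi)
    return out
```

With an all-ones mask the input is reproduced; masking out the gradient at a texture edge flattens the jump, which is exactly the de-emphasis the abstract describes.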
  • Publication number: 20040184677
    Abstract: A method detects silhouette edges in images. An ambient image is acquired of a scene with ambient light. A set of illuminated images is also acquired of the scene. Each illuminated image is acquired with a different light source illuminating the scene. The ambient image is combined with the set of illuminated images to detect cast shadows, and silhouette edge pixels are located from the cast shadows.
    Type: Application
    Filed: March 19, 2003
    Publication date: September 23, 2004
    Inventors: Ramesh Raskar, Jingyi Yu
  • Publication number: 20040183925
    Abstract: A method generates a stylized image. First, a set of images is acquired of a scene. Each image is acquired under a different lighting condition. Silhouette edges are detected in the set of images. Texture regions are identified in the set of images according to the silhouette edges, and an output image is generated from a combination of the set of images wherein the silhouette edges and the texture regions are altered so as to enhance or de-emphasize selected details.
    Type: Application
    Filed: March 19, 2003
    Publication date: September 23, 2004
    Inventors: Ramesh Raskar, Jingyi Yu
  • Publication number: 20040183812
    Abstract: A method generates an image with de-emphasized textures. Each pixel in the image is classified as either a silhouette edge pixel, a texture edge pixel, or a featureless pixel. A mask image M(x, y) is generated, wherein an intensity of a given pixel (x, y) in the mask image M(x, y) is zero if the pixel (x, y) is classified as the texture edge pixel, is d(x, y) if the pixel (x, y) is classified as the featureless pixel, and is one if the pixel (x, y) is classified as the silhouette edge pixel. An intensity gradient ∇I(x, y) is determined in the masked image, and the intensity gradients in the masked image are integrated according to G(x, y)=∇I(x, y)·M(x, y). Then, an output image I′ is generated by minimizing |∇I′−G|, and normalizing the intensities in the output image I′.
    Type: Application
    Filed: March 19, 2003
    Publication date: September 23, 2004
    Inventors: Ramesh Raskar, Jingyi Yu, Adrian Ilie