Patents by Inventor Jinwei Ye

Jinwei Ye has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230362346
    Abstract: A system and method are provided for reconstructing a 3-D point cloud. A light source generates light that is received by a polarization field generator, which generates a polarization field that illuminates the target object being imaged such that each outgoing ray has a unique polarization state. A camera captures images of the illuminated target object and the captured images are received by a processor that: (1) performs a polarization field decoding algorithm that decodes the polarization field to obtain a set of incident rays; (2) performs a camera ray decoding algorithm to obtain a set of camera rays; (3) performs a ray-ray intersection algorithm that determines intersection points where the incident rays and the camera rays intersect; and (4) performs a 3-D reconstruction algorithm that uses the set of incident rays, the set of camera rays, and the intersection points to reconstruct a 3-D point cloud of the target object.
    Type: Application
    Filed: April 24, 2023
    Publication date: November 9, 2023
    Inventors: Jinwei Ye, Jie Lu
  • Patent number: 11671580
    Abstract: A system and method are provided for reconstructing a 3-D point cloud. A light source generates light that is received by a polarization field generator, which generates a polarization field that illuminates the target object being imaged such that each outgoing ray has a unique polarization state. A camera captures images of the illuminated target object and the captured images are received by a processor that: (1) performs a polarization field decoding algorithm that decodes the polarization field to obtain a set of incident rays; (2) performs a camera ray decoding algorithm to obtain a set of camera rays; (3) performs a ray-ray intersection algorithm that determines intersection points where the incident rays and the camera rays intersect; and (4) performs a 3-D reconstruction algorithm that uses the set of incident rays, the set of camera rays, and the intersection points to reconstruct a 3-D point cloud of the target object.
    Type: Grant
    Filed: May 14, 2020
    Date of Patent: June 6, 2023
    Assignee: BOARD OF SUPERVISORS OF LOUISIANA STATE UNIVERSITY AND AGRICULTURAL AND MECHANICAL COLLEGE
    Inventors: Jinwei Ye, Jie Lu
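
The ray-ray intersection step in the abstract above amounts to finding the closest point between an incident ray (decoded from the polarization field) and a camera ray. A minimal sketch of that geometric computation, not the patented implementation (the function name and midpoint convention are illustrative assumptions):

```python
def closest_point_between_rays(p1, d1, p2, d2):
    """Return the midpoint of the shortest segment joining two 3-D rays.

    Each ray is given by an origin p and a direction d; here d1 would come
    from decoding the polarization field and d2 from the camera model.
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # rays are (nearly) parallel
        raise ValueError("rays are parallel; no unique intersection")
    t = (b * e - c * d) / denom     # parameter along ray 1
    s = (a * e - b * d) / denom     # parameter along ray 2
    q1 = [p + t * u for p, u in zip(p1, d1)]
    q2 = [p + s * u for p, u in zip(p2, d2)]
    return [(u + v) / 2 for u, v in zip(q1, q2)]  # surface point estimate
```

Repeating this for every decoded incident-ray/camera-ray pair would yield the 3-D point cloud the abstract describes.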
  • Publication number: 20200366881
    Abstract: A system and method are provided for reconstructing a 3-D point cloud. A light source generates light that is received by a polarization field generator, which generates a polarization field that illuminates the target object being imaged such that each outgoing ray has a unique polarization state. A camera captures images of the illuminated target object and the captured images are received by a processor that: (1) performs a polarization field decoding algorithm that decodes the polarization field to obtain a set of incident rays; (2) performs a camera ray decoding algorithm to obtain a set of camera rays; (3) performs a ray-ray intersection algorithm that determines intersection points where the incident rays and the camera rays intersect; and (4) performs a 3-D reconstruction algorithm that uses the set of incident rays, the set of camera rays, and the intersection points to reconstruct a 3-D point cloud of the target object.
    Type: Application
    Filed: May 14, 2020
    Publication date: November 19, 2020
    Inventors: Jinwei Ye, Jie Lu
  • Patent number: 10546395
    Abstract: Light representing a scene is directed through a lens module coupled to an imaging sensor. The lens module includes: first and second cylindrical lenses positioned along an optical axis of the imaging sensor, and first and second slit-shaped apertures disposed on the respective first and second cylindrical lenses. A cylindrical axis of the second cylindrical lens is arranged at an angle away from parallel with respect to a cylindrical axis of the first cylindrical lens. The light directed through the lens module is captured by the imaging sensor to form at least one multi-perspective image. The at least one multi-perspective image is processed to determine a reconstruction characteristic of the scene.
    Type: Grant
    Filed: October 3, 2014
    Date of Patent: January 28, 2020
    Assignee: University of Delaware
    Inventors: Jingyi Yu, Jinwei Ye, Yu Ji
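
Two crossed slit apertures produce a crossed-slit (multi-perspective) projection rather than a pinhole one: the image of a scene point is formed by the unique ray through that point which passes through both slits. A hedged sketch of that projection (slit positions and a sensor plane at z = 0 are hypothetical choices, not the patent's geometry):

```python
def cross(u, v):
    """3-D cross product."""
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def xslit_project(p, q1, d1, q2, d2):
    """Project scene point p through two slit lines (q + t*d) onto plane z=0.

    Lines through p that meet slit 1 sweep the plane spanned by p and slit 1;
    likewise for slit 2. The intersection of those two planes is the single
    camera ray through p that passes through both slits.
    """
    n1 = cross(d1, [a - b for a, b in zip(p, q1)])  # normal of plane(p, slit 1)
    n2 = cross(d2, [a - b for a, b in zip(p, q2)])  # normal of plane(p, slit 2)
    ray = cross(n1, n2)                             # direction of the camera ray
    if abs(ray[2]) < 1e-12:
        raise ValueError("ray parallel to sensor plane")
    t = -p[2] / ray[2]                              # intersect sensor plane z = 0
    return [p[0] + t * ray[0], p[1] + t * ray[1]]
```

Because the ray must thread both slits, straight lines in the scene generally image to curves, which is what makes the captured images multi-perspective.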
  • Patent number: 10168146
    Abstract: The shape of a specular object is measured by illumination of the object by a light field generated by two or more spaced apart layers controllable to display multiple patterns that are predetermined relative to a bounding volume within which the object is positioned. The patterns code a sparse subset of the multitude of light rays that can be generated by the layers to those that can actually reach the bounding volume. A process is described by which a sparse coding of the light rays can be derived.
    Type: Grant
    Filed: May 10, 2016
    Date of Patent: January 1, 2019
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Siu-Kei Tin, Jinwei Ye
  • Patent number: 9958259
    Abstract: A depth value of an object is measured. The object is illuminated with a luminaire comprising at least three or more pixel-layers including a first pixel-layer, a second pixel-layer and a third pixel-layer, each pixel-layer including a rectangular array of pixels. One or more images are captured of the object illuminated by the pixel-layers of the luminaire. The depth value of a point on the surface of the object is determined based on the one or more captured images. The spaced-apart pixel-layers of the luminaire are grouped into at least a front group and a back group, and the front group is separated from the back group by a distance that is relatively large as compared to a distance by which the spaced-apart pixel-layers within any one group are separated.
    Type: Grant
    Filed: January 12, 2016
    Date of Patent: May 1, 2018
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Siu-Kei Tin, Jinwei Ye
  • Patent number: 9959455
    Abstract: A system for facial recognition comprising at least one processor; at least one input operatively connected to the at least one processor; a database configured to store three-dimensional facial image data comprising facial feature coordinates in a predetermined common plane; the at least one processor configured to locate three-dimensional facial features in the image of the subject, estimate three-dimensional facial feature location coordinates in the image of the subject, obtain the three-dimensional facial feature location coordinates and orientation parameters in a coordinate system in which the facial features are located in the predetermined common plane; and compare the location of the facial feature coordinates of the subject to images of people in the database; whereby recognition, comparison and/or likeness of the facial images is determined by comparing the predetermined common plane facial feature coordinates of the subject to images in the database. A method is also disclosed.
    Type: Grant
    Filed: July 11, 2017
    Date of Patent: May 1, 2018
    Assignee: The United States of America as represented by the Secretary of the Army
    Inventors: Shiqiong Susan Young, Jinwei Ye
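
Once both feature sets are expressed in the predetermined common plane, comparing a subject against the database reduces to a distance between corresponding feature coordinates. A minimal, hypothetical likeness score (the abstract does not specify a metric; RMS distance after removing the translation between the sets is one simple choice):

```python
import math

def likeness_score(features_a, features_b):
    """RMS distance between two equally ordered lists of (x, y) facial
    feature coordinates, after removing the translation between them.

    Lower means more alike. Both sets are assumed to already lie in the
    predetermined common (frontal) plane.
    """
    def centroid(pts):
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

    ca, cb = centroid(features_a), centroid(features_b)
    sq = 0.0
    for (ax, ay), (bx, by) in zip(features_a, features_b):
        dx = (ax - ca[0]) - (bx - cb[0])
        dy = (ay - ca[1]) - (by - cb[1])
        sq += dx * dx + dy * dy
    return math.sqrt(sq / len(features_a))
```

Ranking database entries by this score would give the recognition/likeness comparison the abstract describes.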
  • Publication number: 20180005018
    Abstract: A system for facial recognition comprising at least one processor; at least one input operatively connected to the at least one processor; a database configured to store three-dimensional facial image data comprising facial feature coordinates in a predetermined common plane; the at least one processor configured to locate three-dimensional facial features in the image of the subject, estimate three-dimensional facial feature location coordinates in the image of the subject, obtain the three-dimensional facial feature location coordinates and orientation parameters in a coordinate system in which the facial features are located in the predetermined common plane; and compare the location of the facial feature coordinates of the subject to images of people in the database; whereby recognition, comparison and/or likeness of the facial images is determined by comparing the predetermined common plane facial feature coordinates of the subject to images in the database. A method is also disclosed.
    Type: Application
    Filed: July 11, 2017
    Publication date: January 4, 2018
    Applicant: U.S. Army Research Laboratory ATTN: RDRL-LOC-I
    Inventors: Shiqiong Susan Young, Jinwei Ye
  • Publication number: 20170199028
    Abstract: A depth value of an object is measured. The object is illuminated with a luminaire comprising at least three or more pixel-layers including a first pixel-layer, a second pixel-layer and a third pixel-layer, each pixel-layer including a rectangular array of pixels. One or more images are captured of the object illuminated by the pixel-layers of the luminaire. The depth value of a point on the surface of the object is determined based on the one or more captured images. The spaced-apart pixel-layers of the luminaire are grouped into at least a front group and a back group, and the front group is separated from the back group by a distance that is relatively large as compared to a distance by which the spaced-apart pixel-layers within any one group are separated.
    Type: Application
    Filed: January 12, 2016
    Publication date: July 13, 2017
    Inventors: Siu-Kei Tin, Jinwei Ye
  • Publication number: 20170178390
    Abstract: Devices, systems, and methods obtain two sets of images of an object, each of which was captured from a respective viewpoint; identify pixel regions in the two sets of images that show reflections from a light-modulating device that were reflected by a surface of the object; calculate respective surface normals for points on the surface in the pixel regions; calculate, for each viewpoint, respective unscaled surface coordinates of the points based on the respective surface normals; calculate, for each viewpoint, a respective initial scale factor based on the respective surface normals and on decoded light-modulating-device-pixel indices; calculate, for each viewpoint, scaled surface coordinates of the points based on the respective initial scale factor and the respective unscaled surface coordinates of the viewpoint; and calculate, for each viewpoint, a respective refined scale factor by minimizing discrepancies among the scaled surface coordinates of the points on the surface.
    Type: Application
    Filed: December 7, 2016
    Publication date: June 22, 2017
    Inventors: Jinwei Ye, Siu-Kei Tin, Can Chen, Mahdi Nezamabadi
  • Publication number: 20170131091
    Abstract: Measuring a surface geometry of an object involves capturing one or more images of the object illuminated by a light field produced by one or more luminaires having multiple pixel-layers with overlapping fields of illumination. Each pixel-layer simultaneously and in synchronization with each other displays multiple coded patterns such that combinations of the multiple coded patterns uniquely identify directions of light rays originating from the multiple pixel-layers. A unique incident light ray direction for each pixel of the captured one or more images is determined by decoding the combinations of the multiple coded patterns. The surface geometry of the object is recovered using the determined unique incident light ray direction for each pixel of the one or more captured images.
    Type: Application
    Filed: November 10, 2015
    Publication date: May 11, 2017
    Inventors: Siu-Kei Tin, Jinwei Ye
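
Decoding the coded patterns typically means turning the bit sequence a camera pixel observes over the displayed frames back into a pixel index on each layer; the two decoded layer positions then fix the incident light ray. A sketch assuming Gray-coded binary patterns (the abstract does not commit to a specific code, so this is an illustrative assumption):

```python
import math

def gray_to_index(bits):
    """Convert a Gray-code bit sequence (MSB first), as observed across
    the displayed patterns, into the integer pixel index it encodes."""
    g = 0
    for b in bits:
        g = (g << 1) | b
    mask = g >> 1
    while mask:          # standard Gray-to-binary conversion
        g ^= mask
        mask >>= 1
    return g

def incident_ray(front_pt, back_pt):
    """Unit direction of the light ray through the decoded pixel centers
    on the front and back layers (given as 3-D points)."""
    d = [b - f for f, b in zip(front_pt, back_pt)]
    norm = math.sqrt(sum(c * c for c in d))
    return [c / norm for c in d]
```

Mapping each decoded index to its pixel's 3-D center on the corresponding layer, then joining the two centers, gives the per-pixel incident ray direction used to recover the surface geometry.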
  • Publication number: 20160349046
    Abstract: The shape of a specular object is measured by illumination of the object by a light field generated by two or more spaced apart layers controllable to display multiple patterns that are predetermined relative to a bounding volume within which the object is positioned. The patterns code a sparse subset of the multitude of light rays that can be generated by the layers to those that can actually reach the bounding volume. A process is described by which a sparse coding of the light rays can be derived.
    Type: Application
    Filed: May 10, 2016
    Publication date: December 1, 2016
    Inventors: Siu-Kei Tin, Jinwei Ye
  • Publication number: 20160253824
    Abstract: Multi-perspective cameras, systems and methods for reconstructing a scene are provided. Light representing a scene is directed through a lens module coupled to an imaging sensor. The lens module includes: first and second cylindrical lenses positioned along an optical axis of the imaging sensor, and first and second slit-shaped apertures disposed on the respective first and second cylindrical lenses. A cylindrical axis of the second cylindrical lens is arranged at an angle away from parallel with respect to a cylindrical axis of the first cylindrical lens. The light directed through the lens module is captured by the imaging sensor to form at least one multi-perspective image. The at least one multi-perspective image is processed to determine a reconstruction characteristic of the scene.
    Type: Application
    Filed: October 3, 2014
    Publication date: September 1, 2016
    Applicant: University of Delaware
    Inventors: Jingyi Yu, Jinwei Ye, Yu Ji
  • Publication number: 20160241797
    Abstract: Systems, methods, and devices for generating high-resolution multispectral light-field images are described. The systems and devices include a main lens, a microlens array, a multispectral-filter array that comprises spectral filters that filter light in different wavelengths, and a sensor that is configured to detect incident light. Also, the main lens, the microlens array, the multispectral-filter array, and the sensor are disposed such that light from a scene passes through the main lens, the microlens array, and the multispectral-filter array and strikes a sensing surface of the sensor. Additionally, the multispectral-filter array is disposed so as to encode, in the light that strikes the sensing surface, a plane of the microlens array on the sensing surface of the sensor.
    Type: Application
    Filed: December 8, 2015
    Publication date: August 18, 2016
    Inventor: Jinwei Ye