Patents by Inventor Jingyi Yu

Jingyi Yu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240104822
    Abstract: An image rendering system comprising a preprocessing unit coupled to a feature extraction unit and a color rendering unit over a data bus. The preprocessing unit generates vector representations of the spatial coordinates of sample points along camera rays corresponding to pixels of an image to be rendered. The feature extraction unit generates a feature map of the image from the vector representations and the color and intensity values of the sample points through a first machine learning model. The color rendering unit renders the image based on the feature map through a second machine learning model. The first machine learning model is different from the second machine learning model.
    Type: Application
    Filed: December 7, 2023
    Publication date: March 28, 2024
    Applicant: SHANGHAITECH UNIVERSITY
    Inventors: Chaolin RAO, Minye WU, Xin LOU, Pingqiang ZHOU, Jingyi YU
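A note on the preprocessing step in 20240104822: the "vector representations of spatial coordinates" correspond to the positional encoding used by NeRF-style renderers. Below is a minimal sketch of such an encoding, assuming the sinusoidal mapping common in this literature; the abstract does not specify the exact function, so treat this as illustrative only.

```python
import numpy as np

def positional_encoding(points, num_freqs=10):
    """Map 3D sample-point coordinates along camera rays to the
    high-dimensional vector representations fed to the feature network.
    NOTE: the sinusoidal form is an assumption, not the patented mapping."""
    freqs = 2.0 ** np.arange(num_freqs) * np.pi          # (F,)
    angles = points[..., None] * freqs                   # (N, 3, F)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(points.shape[0], -1)              # (N, 3 * 2F)

# Example: encode 5 sample points along a ray
pts = np.random.rand(5, 3)
print(positional_encoding(pts).shape)                    # (5, 60)
```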
  • Patent number: 11880935
    Abstract: An image-based method of modeling and rendering a three-dimensional model of an object is provided. The method comprises: obtaining a three-dimensional point cloud at each frame of a synchronized, multi-view video of an object, wherein the video comprises a plurality of frames; extracting a feature descriptor for each point in the point cloud for the plurality of frames without storing the feature descriptor for each frame; producing a two-dimensional feature map for a target camera; and using an anti-aliased convolutional neural network to decode the feature map into an image and a foreground mask.
    Type: Grant
    Filed: September 23, 2022
    Date of Patent: January 23, 2024
    Assignee: SHANGHAITECH UNIVERSITY
    Inventors: Minye Wu, Jingyi Yu
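A central step in 11880935 is projecting the per-point feature descriptors into a two-dimensional feature map for the target camera before the anti-aliased CNN decodes it. The sketch below illustrates that projection with a simple z-buffer; the pinhole model with intrinsics `K` and extrinsics `R`, `t` is an assumption, since the patent does not fix the camera model.

```python
import numpy as np

def project_features(points, feats, K, R, t, hw):
    """Splat per-point feature descriptors into a 2D feature map for a
    target camera, keeping only the nearest point per pixel (z-buffer).
    NOTE: pinhole projection is assumed for illustration."""
    h, w = hw
    cam = points @ R.T + t                       # world -> camera frame
    fmap = np.zeros((h, w, feats.shape[1]))
    zbuf = np.full((h, w), np.inf)
    for i in range(len(points)):
        z = cam[i, 2]
        if z <= 0:                               # skip points behind camera
            continue
        px = K @ cam[i]                          # pinhole projection
        u, v = int(round(px[0] / z)), int(round(px[1] / z))
        if 0 <= u < w and 0 <= v < h and z < zbuf[v, u]:
            zbuf[v, u] = z                       # keep the closest point
            fmap[v, u] = feats[i]
    return fmap
```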
  • Patent number: 11880964
    Abstract: A method of processing light field images for separating a transmitted layer from a reflection layer is provided. The method comprises capturing a plurality of views at a plurality of viewpoints with different polarization angles; obtaining an initial disparity estimation for a first view using SIFT-flow, and warping the first view to a reference view; optimizing an objective function comprising a transmitted layer and a secondary layer using an Augmented Lagrange Multiplier (ALM) with an Alternating Direction Minimizing (ADM) strategy; updating the disparity estimation for the first view; repeating the steps of optimizing the objective function and updating the disparity estimation until the change in the objective function between two consecutive iterations falls below a threshold; and separating the transmitted layer and the secondary layer using the disparity estimation for the first view.
    Type: Grant
    Filed: October 19, 2020
    Date of Patent: January 23, 2024
    Assignee: SHANGHAITECH UNIVERSITY
    Inventors: Minye Wu, Zhiru Shi, Jingyi Yu
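The alternating optimization in 11880964 amounts to a loop that re-solves the layer objective and refines the disparity until the objective stops improving. The skeleton below shows only that control flow; `optimize_objective` and `update_disparity` are caller-supplied stand-ins for the ALM/ADM solver and the SIFT-flow-based refinement the abstract describes.

```python
import numpy as np

def separate_layers(views, init_disparity, optimize_objective,
                    update_disparity, threshold=1e-4, max_iters=100):
    """Alternate between solving the layer-separation objective and
    refining disparity until the objective change drops below threshold.
    NOTE: the two callables are placeholders for the patented solvers."""
    disparity, prev_obj = init_disparity, np.inf
    transmitted = secondary = None
    for _ in range(max_iters):
        transmitted, secondary, obj = optimize_objective(views, disparity)
        disparity = update_disparity(views, transmitted, secondary)
        if abs(prev_obj - obj) < threshold:      # converged
            break
        prev_obj = obj
    return transmitted, secondary, disparity
```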
  • Publication number: 20240013479
    Abstract: A computer-implemented method includes encoding a radiance field of an object onto a machine learning model; conducting, based on a set of training images of the object, a training process on the machine learning model to obtain a trained machine learning model, wherein the training process includes a first training process using a plurality of first test sample points followed by a second training process using a plurality of second test sample points located within a threshold distance from a surface region of the object; obtaining target view parameters indicating a view direction of the object; obtaining a plurality of rays associated with a target image of the object; obtaining render sample points on the plurality of rays associated with the target image; and rendering, by inputting the render sample points to the trained machine learning model, colors associated with the pixels of the target image.
    Type: Application
    Filed: September 19, 2023
    Publication date: January 11, 2024
    Applicant: SHANGHAITECH UNIVERSITY
    Inventors: Minye WU, Chaolin RAO, Xin LOU, Pingqiang ZHOU, Jingyi YU
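The second training stage of 20240013479 draws sample points within a threshold distance of the object's surface. The sketch below shows one way such near-surface samples might be generated, assuming a per-ray estimate of the surface depth is available; the actual sampling scheme is not specified in the abstract.

```python
import numpy as np

def near_surface_samples(ray_origins, ray_dirs, surface_depth,
                         threshold=0.05, n_samples=16):
    """Draw second-stage training samples within `threshold` of the
    estimated surface along each ray (uniform offsets are an assumption)."""
    offsets = np.random.uniform(-threshold, threshold,
                                size=(len(ray_origins), n_samples))
    depths = surface_depth[:, None] + offsets            # (R, S)
    return (ray_origins[:, None, :]
            + depths[..., None] * ray_dirs[:, None, :])  # (R, S, 3)
```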
  • Patent number: 11861840
    Abstract: According to some embodiments, an image processing method for extracting a plurality of planar surfaces from a depth map includes computing a depth change indication map (DCI) from the depth map in accordance with a smoothness threshold. The method further includes recursively extracting a plurality of planar regions from the depth map, wherein the size of each planar region is dynamically adjusted according to the DCI. The method further includes clustering the extracted planar regions into a plurality of groups in accordance with a distance function, and growing each group to generate pixel-wise segmentation results and inlier point statistics simultaneously.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: January 2, 2024
    Assignee: SHANGHAITECH UNIVERSITY
    Inventors: Ziran Xing, Zhiru Shi, Yi Ma, Jingyi Yu
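The DCI in 11861840 flags where the depth map violates the smoothness threshold, which in turn bounds how far each planar region may grow. A minimal sketch, assuming the DCI is a per-pixel boolean derived from finite differences (the exact formulation is not given in the abstract):

```python
import numpy as np

def depth_change_indication(depth, smoothness=0.02):
    """Flag pixels whose depth differs from a horizontal or vertical
    neighbor by more than the smoothness threshold.
    NOTE: finite differences are an illustrative assumption."""
    dci = np.zeros_like(depth, dtype=bool)
    dci[:, 1:] |= np.abs(np.diff(depth, axis=1)) > smoothness
    dci[1:, :] |= np.abs(np.diff(depth, axis=0)) > smoothness
    return dci
```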
  • Publication number: 20230360372
    Abstract: Systems, methods, and non-transitory computer-readable media are configured to obtain a set of content items to train a neural radiance field-based (NeRF-based) machine learning model for object recognition. Depth maps of objects depicted in the set of content items can be determined. A first set of training data comprising reconstructed content items depicting only the objects can be generated based on the depth maps. A second set of training data comprising one or more optimal training paths associated with the set of content items can be generated based on the depth maps. The one or more optimal training paths are generated based at least in part on a dissimilarity matrix associated with the set of content items. The NeRF-based machine learning model can be trained based on the first set of training data and the second set of training data.
    Type: Application
    Filed: July 19, 2023
    Publication date: November 9, 2023
    Applicant: SHANGHAITECH UNIVERSITY
    Inventors: Fuqiang ZHAO, Minye WU, Lan XU, Jingyi YU
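20230360372 derives its training paths from a dissimilarity matrix over the content items. The sketch below shows one plausible realization, assuming depth maps as the comparison signal and a greedy most-dissimilar-next ordering; both choices are illustrative assumptions, not the patented method.

```python
import numpy as np

def dissimilarity_matrix(depth_maps):
    """Pairwise dissimilarity between content items, scored here as the
    mean absolute difference of their depth maps (an assumption)."""
    n = len(depth_maps)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = np.mean(np.abs(depth_maps[i] - depth_maps[j]))
    return d

def greedy_training_path(d):
    """One hypothetical 'optimal path': visit the most dissimilar
    unvisited item next, maximizing view diversity early in training."""
    path, remaining = [0], set(range(1, len(d)))
    while remaining:
        nxt = max(remaining, key=lambda j: d[path[-1], j])
        path.append(nxt)
        remaining.remove(nxt)
    return path
```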
  • Publication number: 20230273318
    Abstract: Described herein are systems and methods for training machine learning models to generate three-dimensional (3D) motions based on light detection and ranging (LiDAR) point clouds. In various embodiments, a computing system can encode a machine learning model representing an object in a scene. The computing system can train the machine learning model using a dataset comprising synchronous LiDAR point clouds captured by monocular LiDAR sensors and ground-truth three-dimensional motions obtained from IMU devices. The machine learning model can be configured to generate a three-dimensional motion of the object based on an input of a plurality of point cloud frames captured by a monocular LiDAR sensor.
    Type: Application
    Filed: August 9, 2022
    Publication date: August 31, 2023
    Inventors: Cheng WANG, Jialian LI, Lan XU, Chenglu WEN, Jingyi YU
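The training in 20230273318 pairs monocular-LiDAR point cloud frames with IMU-derived ground-truth motions. Below is a minimal PyTorch-style skeleton of such supervision; the regression loss and the shapes expected by `model` and `loader` are assumptions, since the abstract specifies neither the architecture nor the loss.

```python
import torch

def train_motion_model(model, loader, epochs=10, lr=1e-4):
    """Supervise a point-cloud-to-motion model with IMU-derived
    ground-truth motions. NOTE: MSE supervision is an assumption."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for pc_frames, gt_motion in loader:      # (B, T, N, 3), (B, T, D)
            pred = model(pc_frames)              # predicted 3D motion
            loss = torch.nn.functional.mse_loss(pred, gt_motion)
            opt.zero_grad()
            loss.backward()
            opt.step()
```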
  • Publication number: 20230273315
    Abstract: Described herein are systems and methods of capturing the motions of humans in a scene. A plurality of IMU devices and a LiDAR sensor are mounted on a human. IMU data is captured by the IMU devices and LiDAR data is captured by the LiDAR sensor. Motions of the human are estimated based on the IMU data and the LiDAR data. A three-dimensional scene map is built based on the LiDAR data. An optimization is performed to obtain optimized motions of the human and an optimized scene map.
    Type: Application
    Filed: August 9, 2022
    Publication date: August 31, 2023
    Inventors: Chenglu WEN, Yudi DAI, Lan XU, Cheng WANG, Jingyi YU
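The final optimization in 20230273315 balances agreement with the IMU motion estimate against agreement with the LiDAR data. The deliberately simplified least-squares sketch below illustrates such a two-term objective; treating both inputs as pose trajectories in a common frame is a hypothetical simplification of the joint motion-and-map optimization the abstract describes.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_trajectory(imu_poses, lidar_poses, w_imu=1.0, w_lidar=1.0):
    """Refine a trajectory so it stays close to the IMU estimate while
    agreeing with LiDAR registration. NOTE: both residual terms are
    stand-ins for the richer residuals a real system would use."""
    def residuals(x):
        traj = x.reshape(imu_poses.shape)
        return np.concatenate([
            w_imu * (traj - imu_poses).ravel(),      # IMU consistency
            w_lidar * (traj - lidar_poses).ravel(),  # LiDAR consistency
        ])
    res = least_squares(residuals, imu_poses.ravel())
    return res.x.reshape(imu_poses.shape)
```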
  • Patent number: 11727628
    Abstract: A method of rendering an object is provided. The method comprises: encoding a feature vector to each point in a point cloud for an object, wherein the feature vector comprises an alpha matte; projecting each point in the point cloud and the corresponding feature vector to a target view to compute a feature map; and using a neural rendering network to decode the feature map into an RGB image and the alpha matte and to update the feature vector.
    Type: Grant
    Filed: November 4, 2022
    Date of Patent: August 15, 2023
    Assignee: ShanghaiTech University
    Inventors: Cen Wang, Jingyi Yu
  • Publication number: 20230071559
    Abstract: A method of rendering an object is provided. The method comprises: encoding a feature vector to each point in a point cloud for an object, wherein the feature vector comprises an alpha matte; projecting each point in the point cloud and the corresponding feature vector to a target view to compute a feature map; and using a neural rendering network to decode the feature map into an RGB image and the alpha matte and to update the feature vector.
    Type: Application
    Filed: November 4, 2022
    Publication date: March 9, 2023
    Inventors: Cen WANG, Jingyi YU
  • Publication number: 20230027234
    Abstract: An image-based method of modeling and rendering a three-dimensional model of an object is provided. The method comprises: obtaining a three-dimensional point cloud at each frame of a synchronized, multi-view video of an object, wherein the video comprises a plurality of frames; extracting a feature descriptor for each point in the point cloud for the plurality of frames without storing the feature descriptor for each frame; producing a two-dimensional feature map for a target camera; and using an anti-aliased convolutional neural network to decode the feature map into an image and a foreground mask.
    Type: Application
    Filed: September 23, 2022
    Publication date: January 26, 2023
    Inventors: Minye WU, Jingyi YU
  • Patent number: 11528427
    Abstract: An image capturing system includes a center point location, and a circular light source ring centered on the center point location. Light sources are located on the circular light source ring, and each emits light in one of a number of spectral bandwidths. The image capturing system also includes circular camera rings, where each circular camera ring is centered on the center point location, and where each circular camera ring includes camera locations that are equally spaced apart. The image capturing system also includes one or more cameras configured to capture images of an object from each of the camera locations of each particular circular camera ring, where the images captured from the camera locations of each particular circular camera ring are captured in one of the different spectral bandwidths, and where the object is illuminated by the light sources of the light source ring while each of the images is captured.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: December 13, 2022
    Inventors: Yu Ji, Mingyuan Zhou, Jingyi Yu
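The equally spaced camera locations on each ring of 11528427 follow directly from the ring geometry. A minimal sketch computing those locations for one ring, assuming for illustration that the ring lies in a horizontal plane at height `z`:

```python
import numpy as np

def ring_camera_positions(center, radius, n_cameras, z=0.0):
    """Equally spaced camera locations on a circular ring centered on the
    system's center point location (planar ring is an assumption)."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_cameras, endpoint=False)
    xy = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return np.asarray(center) + np.column_stack([xy, np.full(n_cameras, z)])

# Example: 12 equally spaced locations on a ring of radius 1.5
print(ring_camera_positions(center=[0, 0, 0], radius=1.5, n_cameras=12))
```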
  • Patent number: 11410459
    Abstract: A method of detecting and recognizing faces using a light field camera array is provided. The method includes capturing multi-view color images using the light field camera array; obtaining a depth map; conducting light field rendering using a weight function comprising a depth component and a semantic component, where the weight function assigns a weight to each ray in the light field; and detecting and recognizing a face.
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: August 9, 2022
    Assignee: ShanghaiTech University
    Inventors: Zhiru Shi, Minye Wu, Wenguang Ma, Jingyi Yu
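The rendering weight in 11410459 combines a depth component with a semantic component. The sketch below shows one plausible way to blend them, assuming a Gaussian depth kernel and a convex combination; the abstract does not give the actual functional form.

```python
import numpy as np

def ray_weight(depth_error, semantic_score, sigma_d=0.1, alpha=0.5):
    """Blend a depth term (agreement of the ray with the depth map) with
    a semantic term (e.g. a face-region score) into a single ray weight.
    NOTE: the Gaussian kernel and convex blend are assumptions."""
    depth_term = np.exp(-(depth_error ** 2) / (2.0 * sigma_d ** 2))
    return alpha * depth_term + (1.0 - alpha) * semantic_score
```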
  • Patent number: 11354840
    Abstract: A method and system for using multiple image cameras, or multiple image and depth cameras, to capture a target object. Geometry and texture are reconstructed using the captured images and depth images. New images are rendered using geometry-based or image-based rendering methods.
    Type: Grant
    Filed: March 19, 2020
    Date of Patent: June 7, 2022
    Inventor: Jingyi Yu
  • Publication number: 20220174224
    Abstract: An image capturing system includes a center point location, and a circular light source ring centered on the center point location. Light sources are located on the circular light source ring, and each emits light in one of a number of spectral bandwidths. The image capturing system also includes circular camera rings, where each circular camera ring is centered on the center point location, and where each circular camera ring includes camera locations that are equally spaced apart. The image capturing system also includes one or more cameras configured to capture images of an object from each of the camera locations of each particular circular camera ring, where the images captured from the camera locations of each particular circular camera ring are captured in one of the different spectral bandwidths, and where the object is illuminated by the light sources of the light source ring while each of the images is captured.
    Type: Application
    Filed: December 22, 2020
    Publication date: June 2, 2022
    Inventors: Yu JI, Mingyuan ZHOU, Jingyi YU
  • Publication number: 20210241462
    Abstract: According to some embodiments, an image processing method for extracting a plurality of planar surfaces from a depth map includes computing a depth change indication map (DCI) from the depth map in accordance with a smoothness threshold. The method further includes recursively extracting a plurality of planar regions from the depth map, wherein the size of each planar region is dynamically adjusted according to the DCI. The method further includes clustering the extracted planar regions into a plurality of groups in accordance with a distance function, and growing each group to generate pixel-wise segmentation results and inlier point statistics simultaneously.
    Type: Application
    Filed: March 31, 2021
    Publication date: August 5, 2021
    Inventors: Ziran XING, Zhiru SHI, Yi MA, Jingyi YU
  • Publication number: 20210082096
    Abstract: A method of processing light field images for separating a transmitted layer from a reflection layer is provided. The method comprises capturing a plurality of views at a plurality of viewpoints with different polarization angles; obtaining an initial disparity estimation for a first view using SIFT-flow, and warping the first view to a reference view; optimizing an objective function comprising a transmitted layer and a secondary layer using an Augmented Lagrange Multiplier (ALM) with an Alternating Direction Minimizing (ADM) strategy; updating the disparity estimation for the first view; repeating the steps of optimizing the objective function and updating the disparity estimation until the change in the objective function between two consecutive iterations falls below a threshold; and separating the transmitted layer and the secondary layer using the disparity estimation for the first view.
    Type: Application
    Filed: October 19, 2020
    Publication date: March 18, 2021
    Inventors: Minye WU, Zhiru SHI, Jingyi YU
  • Patent number: 10909752
    Abstract: The present invention relates to an all-around spherical light field rendering method, comprising: a preparation step of inputting and loading the related files; pre-computing a latticed depth map for the positions of reference cameras densely distributed on a sphere; moving a rendering camera, whose range of motion is constrained to the surface of the sphere, and identifying the reference cameras that surround it; back-projecting the pixels of the rendering camera and performing a depth test against the four surrounding reference cameras; and interpolating among the reference cameras that pass the depth test to obtain the final rendered pixel value. With the present invention, rendering results can be viewed in real time, and an object can be observed from any angle on the spherical surface with a truly immersive effect.
    Type: Grant
    Filed: July 16, 2018
    Date of Patent: February 2, 2021
    Assignee: PLEX-VR DIGITAL TECHNOLOGY (SHANGHAI) CO., LTD.
    Inventors: Jingyi Yu, Huangjie Yu
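The final interpolation step of 10909752 blends the reference cameras that survive the depth test. A minimal per-pixel sketch, assuming inverse-distance weights between camera positions on the sphere (the actual weighting scheme is not specified in the abstract):

```python
import numpy as np

def blend_reference_cameras(render_pos, ref_positions, ref_pixels,
                            ref_depths, backproj_depths, eps=0.05):
    """Interpolate the reference cameras that pass the depth test,
    weighting each by its closeness to the rendering camera.
    NOTE: inverse-distance weighting is an assumption."""
    passed = np.abs(ref_depths - backproj_depths) < eps   # depth test
    if not passed.any():
        return None                     # no valid reference for this pixel
    dists = np.linalg.norm(ref_positions[passed] - render_pos, axis=1)
    w = 1.0 / (dists + 1e-8)
    w /= w.sum()
    return (w[:, None] * ref_pixels[passed]).sum(axis=0)
```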
  • Publication number: 20200380770
    Abstract: The present invention relates to an all-around spherical light field rendering method, comprising: a preparation step of inputting and loading the related files; pre-computing a latticed depth map for the positions of reference cameras densely distributed on a sphere; moving a rendering camera, whose range of motion is constrained to the surface of the sphere, and identifying the reference cameras that surround it; back-projecting the pixels of the rendering camera and performing a depth test against the four surrounding reference cameras; and interpolating among the reference cameras that pass the depth test to obtain the final rendered pixel value. With the present invention, rendering results can be viewed in real time, and an object can be observed from any angle on the spherical surface with a truly immersive effect.
    Type: Application
    Filed: July 16, 2018
    Publication date: December 3, 2020
    Inventors: Jingyi Yu, Huangjie Yu