Patents by Inventor Yicheng Wu

Yicheng Wu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12133070
    Abstract: The invention relates to the technical field of battery charging and swapping, and in particular to an off-line battery swap method, a battery charging and swap station, a vehicle with a battery to be swapped, and a readable storage medium. The invention is intended to solve the problem that a battery swap cannot be performed at a battery charging and swap station when a vehicle is not connected to a network or when a cloud server has a fault.
    Type: Grant
    Filed: January 19, 2022
    Date of Patent: October 29, 2024
    Assignee: NIO TECHNOLOGY (ANHUI) CO., LTD
    Inventor: Yicheng Wu
  • Publication number: 20240355239
    Abstract: An under-screen camera is provided. A camera is positioned behind a see-through display screen to capture scene image data of objects in front of the display screen. The camera captures scene image data of a real-world scene including a user. The scene image data is processed to remove artifacts created by capturing it through the see-through display screen, such as blur, noise, backscatter, wiring effects, and the like.
    Type: Application
    Filed: April 17, 2024
    Publication date: October 24, 2024
    Inventors: Shree K. Nayar, Gurunandan Krishnan Gorumkonda, Jian Wang, Bing Zhou, Sizhuo Ma, Karl Bayer, Yicheng Wu
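    A minimal sketch (Python, not from the patent) of one building block of this kind of pipeline: undoing the blur introduced by the see-through display with a Wiener deconvolution, assuming the display's point spread function (PSF) has been measured separately. The PSF, noise level, and function names are illustrative assumptions, not the patented method.

      import numpy as np

      def _psf_to_otf(psf, shape):
          """Zero-pad a PSF to `shape`, center it at the origin, and FFT it."""
          pad = np.zeros(shape)
          ph, pw = psf.shape
          pad[:ph, :pw] = psf
          pad = np.roll(pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
          return np.fft.fft2(pad)

      def wiener_deblur(image, psf, noise_to_signal=1e-2):
          """Remove blur caused by a known PSF using a Wiener filter."""
          H = _psf_to_otf(psf, image.shape)
          G = np.fft.fft2(image)
          W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
          return np.real(np.fft.ifft2(W * G))

      # Toy usage: blur a random "scene" with a 5x5 box PSF, then restore it.
      rng = np.random.default_rng(0)
      scene = rng.random((128, 128))
      psf = np.full((5, 5), 1.0 / 25.0)
      blurred = np.real(np.fft.ifft2(_psf_to_otf(psf, scene.shape) * np.fft.fft2(scene)))
      restored = wiener_deblur(blurred, psf)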
  • Patent number: 12112427
    Abstract: Images of a scene are received. The images represent viewpoints corresponding to the scene. A pixel map of the scene is computed based on the images. Multi-plane image (MPI) layers are extracted from the pixel map in real time. The MPI layers are aggregated. The scene is rendered from a novel viewpoint based on the aggregated MPI layers.
    Type: Grant
    Filed: August 29, 2022
    Date of Patent: October 8, 2024
    Assignee: SNAP INC.
    Inventors: Numair Khalil Ullah Khan, Gurunandan Krishnan Gorumkonda, Shree K. Nayar, Yicheng Wu
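    For context, MPI-style rendering ultimately reduces to alpha-compositing a stack of fronto-parallel RGBA planes. The Python sketch below shows only that back-to-front compositing step, with the layer extraction and the homography warp to the novel viewpoint omitted; the array layout and function name are assumptions for illustration, not the patented method.

      import numpy as np

      def composite_mpi(colors, alphas):
          """Alpha-composite multi-plane image (MPI) layers from back to front.

          colors: (D, H, W, 3) per-plane RGB, ordered nearest to farthest.
          alphas: (D, H, W, 1) per-plane opacity in [0, 1].
          """
          out = np.zeros(colors.shape[1:], dtype=np.float64)
          # Walk from the farthest plane to the nearest one (the "over" operator).
          for color, alpha in zip(colors[::-1], alphas[::-1]):
              out = color * alpha + out * (1.0 - alpha)
          return out

      # Toy usage: a half-transparent white plane in front of an opaque black one.
      H, W = 4, 4
      colors = np.stack([np.ones((H, W, 3)), np.zeros((H, W, 3))])
      alphas = np.stack([np.full((H, W, 1), 0.5), np.ones((H, W, 1))])
      rendered = composite_mpi(colors, alphas)   # uniform mid-gray image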
  • Publication number: 20240320808
    Abstract: A method includes obtaining an input image that contains a particular representation of lens flare, and processing the input image by a machine learning model to generate a de-flared image that includes the input image with at least part of the particular representation of lens flare removed. The machine learning (ML) model may be trained by generating training images that combine respective baseline images with corresponding lens flare images. For each respective training image, a modified image may be determined by processing the respective training image by the ML model, and a loss value may be determined based on a loss function comparing the modified image to a corresponding baseline image used to generate the respective training image. Parameters of the ML model may be adjusted based on the loss value determined for each respective training image and the loss function.
    Type: Application
    Filed: June 5, 2024
    Publication date: September 26, 2024
    Inventors: Yicheng Wu, Qiurui He, Tianfan Xue, Rahul Garg, Jiawen Chen, Jonathan T. Barron
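    A minimal sketch of the training procedure this abstract describes, written in Python with PyTorch: flare-free baselines are combined with flare layers to form training images, the model predicts a de-flared image, and a loss against the baseline drives the parameter update. The tiny network, additive flare model, L1 loss, and random tensors standing in for real images are all assumptions for illustration, not the patented model.

      import torch
      from torch import nn

      # Toy stand-in for the de-flare network; the real architecture is not specified here.
      model = nn.Sequential(
          nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
          nn.Conv2d(16, 3, kernel_size=3, padding=1),
      )
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.L1Loss()

      def make_training_image(baseline, flare):
          """Combine a baseline image with a lens-flare image (additive, clipped)."""
          return torch.clamp(baseline + flare, 0.0, 1.0)

      for step in range(10):
          baseline = torch.rand(4, 3, 64, 64)       # flare-free ground truth
          flare = 0.5 * torch.rand(4, 3, 64, 64)    # synthetic flare layer
          training_image = make_training_image(baseline, flare)

          deflared = model(training_image)          # attempt to remove the flare
          loss = loss_fn(deflared, baseline)        # compare against the baseline

          optimizer.zero_grad()
          loss.backward()
          optimizer.step()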
  • Publication number: 20240288696
    Abstract: An energy-efficient adaptive 3D sensing system. The adaptive 3D sensing system includes one or more cameras and one or more projectors. The adaptive 3D sensing system captures images of a real-world scene using the one or more cameras and computes depth estimates and depth estimate confidence values for pixels of the images. The adaptive 3D sensing system computes an attention mask based on the one or more depth estimate confidence values and commands the one or more projectors to send a distributed laser beam into one or more areas of the real-world scene based on the attention mask. The adaptive 3D sensing system captures 3D sensing image data of the one or more areas of the real-world scene and generates 3D sensing data for the real-world scene based on the 3D sensing image data.
    Type: Application
    Filed: May 2, 2024
    Publication date: August 29, 2024
    Inventors: Jian Wang, Sizhuo Ma, Brevin Tilmon, Yicheng Wu, Gurunandan Krishnan Gorumkonda, Ramzi Zahreddine, Georgios Evangelidis
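    A minimal sketch (Python) of one step described above: turning a per-pixel depth-confidence map into a coarse attention mask telling the projector which zones of the scene still need active illumination. The zone size, threshold, and function name are assumptions for illustration.

      import numpy as np

      def attention_mask(confidence, threshold=0.6, zone=16):
          """Mark projector zones whose passive depth estimates have low confidence.

          confidence: (H, W) array in [0, 1]; higher means the passive depth
          estimate is already trusted and needs no active illumination.
          Returns an (H // zone, W // zone) boolean grid of zones to illuminate.
          """
          H, W = confidence.shape
          zones = confidence[:H - H % zone, :W - W % zone]
          zones = zones.reshape(H // zone, zone, W // zone, zone).mean(axis=(1, 3))
          return zones < threshold

      # Toy usage: print the zone grid shape and the fraction of zones to illuminate.
      rng = np.random.default_rng(0)
      mask = attention_mask(rng.random((128, 128)))
      print(mask.shape, mask.mean())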
  • Patent number: 12073578
    Abstract: A method for a passive single-viewpoint 3D imaging system comprises capturing an image from a camera having one or more phase masks. The method further includes using a reconstruction algorithm for estimation of a 3D or depth image.
    Type: Grant
    Filed: April 26, 2023
    Date of Patent: August 27, 2024
    Assignees: William Marsh Rice University, Carnegie Mellon University
    Inventors: Yicheng Wu, Vivek Boominathan, Huaijin Chen, Aswin C. Sankaranarayanan, Ashok Veeraraghavan
  • Patent number: 12033309
    Abstract: A method includes obtaining an input image that contains a particular representation of lens flare, and processing the input image by a machine learning model to generate a de-flared image that includes the input image with at least part of the particular representation of lens flare removed. The machine learning (ML) model may be trained by generating training images that combine respective baseline images with corresponding lens flare images. For each respective training image, a modified image may be determined by processing the respective training image by the ML model, and a loss value may be determined based on a loss function comparing the modified image to a corresponding baseline image used to generate the respective training image. Parameters of the ML model may be adjusted based on the loss value determined for each respective training image and the loss function.
    Type: Grant
    Filed: November 9, 2020
    Date of Patent: July 9, 2024
    Assignee: Google LLC
    Inventors: Yicheng Wu, Qiurui He, Tianfan Xue, Rahul Garg, Jiawen Chen, Jonathan T. Barron
  • Patent number: 12001024
    Abstract: An energy-efficient adaptive 3D sensing system. The adaptive 3D sensing system includes one or more cameras and one or more projectors. The adaptive 3D sensing system captures images of a real-world scene using the one or more cameras and computes depth estimates and depth estimate confidence values for pixels of the images. The adaptive 3D sensing system computes an attention mask based on the one or more depth estimate confidence values and commands the one or more projectors to send a distributed laser beam into one or more areas of the real-world scene based on the attention mask. The adaptive 3D sensing system captures 3D sensing image data of the one or more areas of the real-world scene and generates 3D sensing data for the real-world scene based on the 3D sensing image data.
    Type: Grant
    Filed: April 13, 2023
    Date of Patent: June 4, 2024
    Assignee: Snap Inc.
    Inventors: Jian Wang, Sizhuo Ma, Brevin Tilmon, Yicheng Wu, Gurunandan Krishnan Gorumkonda, Ramzi Zahreddine, Georgios Evangelidis
  • Publication number: 20240126084
    Abstract: An energy-efficient adaptive 3D sensing system. The adaptive 3D sensing system includes one or more cameras and one or more projectors. The adaptive 3D sensing system captures images of a real-world scene using the one or more cameras and computes depth estimates and depth estimate confidence values for pixels of the images. The adaptive 3D sensing system computes an attention mask based on the one or more depth estimate confidence values and commands the one or more projectors to send a distributed laser beam into one or more areas of the real-world scene based on the attention mask. The adaptive 3D sensing system captures 3D sensing image data of the one or more areas of the real-world scene and generates 3D sensing data for the real-world scene based on the 3D sensing image data.
    Type: Application
    Filed: April 13, 2023
    Publication date: April 18, 2024
    Inventors: Jian Wang, Sizhuo Ma, Brevin Tilmon, Yicheng Wu, Gurunandan Krishnan Gorumkonda, Ramzi Zahreddine, Georgios Evangelidis
  • Publication number: 20230410341
    Abstract: A method for a passive single-viewpoint 3D imaging system comprises capturing an image from a camera having one or more phase masks. The method further includes using a reconstruction algorithm for estimation of a 3D or depth image.
    Type: Application
    Filed: April 26, 2023
    Publication date: December 21, 2023
    Applicants: William Marsh Rice University, Carnegie Mellon University
    Inventors: Yicheng Wu, Vivek Boominathan, Huaijin Chen, Aswin C. Sankaranarayanan, Ashok Veeraraghavan
  • Publication number: 20230360251
    Abstract: A device that measures the size of a user's face, referred to as face scaling, using a monocular camera. Depth is calculated from sparse feature points, and a face mesh is used to improve the estimation accuracy. A processing pipeline detects face features by applying a face landmark detection algorithm to find important face feature points such as the eyes, nose, and mouth. The pipeline estimates the depth of these feature points using depth obtained through image defocus, and then scales the face using the estimated depth of the face features.
    Type: Application
    Filed: May 6, 2023
    Publication date: November 9, 2023
    Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar, Yicheng Wu
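    The landmark detection and defocus-based depth estimation are beyond a short sketch, but the final scaling step reduces to the pinhole projection model: a pixel distance between two landmarks, times depth, over focal length, gives a metric distance. The Python sketch below uses made-up numbers, and the landmark choice and function name are illustrative assumptions.

      import numpy as np

      def metric_distance(p1_px, p2_px, depth_m, focal_px):
          """Convert a pixel distance between two face landmarks into meters.

          Pinhole model: size_m = size_px * depth_m / focal_length_px, assuming
          both landmarks lie at roughly the same estimated depth.
          """
          size_px = np.linalg.norm(np.asarray(p2_px, float) - np.asarray(p1_px, float))
          return size_px * depth_m / focal_px

      # Toy usage: eye centers 180 px apart, face estimated 0.4 m from a camera
      # with a 1200 px focal length -> about 6 cm interpupillary distance.
      ipd_m = metric_distance((400, 520), (580, 524), depth_m=0.4, focal_px=1200.0)
      print(f"estimated interpupillary distance: {ipd_m * 100:.1f} cm")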
  • Patent number: 11676294
    Abstract: A method for a passive single-viewpoint 3D imaging system comprises capturing an image from a camera having one or more phase masks. The method further includes using a reconstruction algorithm for estimation of a 3D or depth image.
    Type: Grant
    Filed: May 1, 2020
    Date of Patent: June 13, 2023
    Assignees: William Marsh Rice University, Carnegie Mellon University
    Inventors: Yicheng Wu, Vivek Boominathan, Huaijin Chen, Aswin C. Sankaranarayanan, Ashok Veeraraghavan
  • Publication number: 20230069614
    Abstract: Images of a scene are received. The images represent viewpoints corresponding to the scene. A pixel map of the scene is computed based on the images. Multi-plane image (MPI) layers are extracted from the pixel map in real time. The MPI layers are aggregated. The scene is rendered from a novel viewpoint based on the aggregated MPI layers.
    Type: Application
    Filed: August 29, 2022
    Publication date: March 2, 2023
    Inventors: Numair Khalil Ullah Khan, Gurunandan Krishnan Gorumkonda, Shree K. Nayar, Yicheng Wu
  • Patent number: 11584237
    Abstract: Disclosed are a mobile Internet-based integrated vehicle energy replenishment system and method, and a storage medium.
    Type: Grant
    Filed: February 7, 2018
    Date of Patent: February 21, 2023
    Assignee: NIO CO., LTD.
    Inventors: Bin Li, Lihong Qin, Fei Shen, Xin Zhou, Jinxing Qiang, Jianxing Zhang, Xu He, Xiang Ma, Yicheng Wu, Xiaobin Pan
  • Publication number: 20220375045
    Abstract: A method includes obtaining an input image that contains a particular representation of lens flare, and processing the input image by a machine learning model to generate a de-flared image that includes the input image with at least part of the particular representation of lens flare removed. The machine learning (ML) model may be trained by generating training images that combine respective baseline images with corresponding lens flare images. For each respective training image, a modified image may be determined by processing the respective training image by the ML model, and a loss value may be determined based on a loss function comparing the modified image to a corresponding baseline image used to generate the respective training image. Parameters of the ML model may be adjusted based on the loss value determined for each respective training image and the loss function.
    Type: Application
    Filed: November 9, 2020
    Publication date: November 24, 2022
    Inventors: Yicheng Wu, Qiurui He, Tianfan Xue, Rahul Garg, Jiawen Chen, Jonathan T. Barron
  • Publication number: 20220227249
    Abstract: The invention relates to the technical field of battery charging and swapping, and in particular to an off-line battery swap method, a battery charging and swap station, a vehicle with a battery to be swapped, and a readable storage medium. The invention is intended to solve the problem that a battery swap cannot be performed at a battery charging and swap station when a vehicle is not connected to a network or when a cloud server has a fault.
    Type: Application
    Filed: January 19, 2022
    Publication date: July 21, 2022
    Inventor: Yicheng Wu
  • Patent number: 10983779
    Abstract: A system upgrade assessment method based on system parameter correlation coefficients is provided. It solves the problem that existing system upgrade assessment methods cannot accurately assess an upgraded system. To this end, the method comprises the following steps: acquiring first data for a plurality of parameters before a system upgrade (S110); acquiring second data for the plurality of parameters after the system upgrade (S120); calculating first correlation coefficients of the first data and second correlation coefficients of the second data (S130); calculating third correlation coefficients between the first data and the corresponding second data (S140); and determining, based on the magnitudes of the first correlation coefficients, the second correlation coefficients, and the third correlation coefficients, whether the system upgrade succeeds (S150).
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: April 20, 2021
    Assignee: NIO CO., LTD.
    Inventors: Haitao Du, Yicheng Wu, Chongkui Jin
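    Steps S110 through S150 map naturally onto correlation computations over logged parameter data. The Python sketch below is one way to realize them with NumPy; the specific decision rule and thresholds used for S150 are illustrative assumptions, since the abstract only says the decision is based on the magnitudes of the three sets of coefficients.

      import numpy as np

      def assess_upgrade(first_data, second_data, threshold=0.9):
          """Decide whether an upgrade preserved the system's parameter behaviour.

          first_data, second_data: (num_samples, num_parameters) arrays of the
          same parameters logged before (S110) and after (S120) the upgrade,
          with matching sample counts.
          """
          # S130: correlations among parameters before and after the upgrade.
          first_corr = np.corrcoef(first_data, rowvar=False)
          second_corr = np.corrcoef(second_data, rowvar=False)

          # S140: correlation between each parameter's before- and after-series.
          third_corr = np.array([
              np.corrcoef(first_data[:, i], second_data[:, i])[0, 1]
              for i in range(first_data.shape[1])
          ])

          # S150: illustrative rule -- the internal correlation structure is
          # preserved and every parameter still tracks its pre-upgrade behaviour.
          structure_ok = np.allclose(first_corr, second_corr, atol=1.0 - threshold)
          tracking_ok = bool(np.all(third_corr > threshold))
          return structure_ok and tracking_ok

      # Toy usage: the "after" data is the "before" data plus small noise, so it passes.
      rng = np.random.default_rng(0)
      before = rng.random((500, 4))
      after = before + 0.01 * rng.standard_normal((500, 4))
      print(assess_upgrade(before, after))   # True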
  • Publication number: 20200351454
    Abstract: A system for a wavefront imaging sensor with high resolution (WISH) comprises a spatial light modulator (SLM), a plurality of image sensors, and a processor. The system combines the SLM with a computational post-processing algorithm to recover an incident wavefront with high spatial resolution and a fine phase estimate. In addition, the image sensors work both within and outside the visible electromagnetic (EM) spectrum.
    Type: Application
    Filed: April 30, 2020
    Publication date: November 5, 2020
    Applicant: William Marsh Rice University
    Inventors: Yicheng Wu, Manoj Kumar Sharma, Ashok Veeraraghavan
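    A toy Python simulation in the spirit of this abstract: several random phase-only SLM patterns modulate an unknown complex field, the sensor records only intensities, and alternating projections averaged over the measurements recover the field. The Fourier-transform propagation model, pattern count, and iteration budget are assumptions for illustration, not the patented algorithm.

      import numpy as np

      rng = np.random.default_rng(0)
      N, K, ITERS = 64, 8, 200

      # Ground-truth complex field that the sensor cannot observe directly.
      truth = rng.random((N, N)) * np.exp(1j * 2 * np.pi * rng.random((N, N)))

      # K phase-only SLM patterns and the intensity-only measurements they produce.
      slm = np.exp(1j * 2 * np.pi * rng.random((K, N, N)))
      meas_amp = np.abs(np.fft.fft2(slm * truth))   # sensor amplitude (sqrt of intensity)

      # Alternating projections averaged over all SLM measurements.
      est = np.exp(1j * 2 * np.pi * rng.random((N, N)))   # random initial guess
      for _ in range(ITERS):
          updates = []
          for k in range(K):
              field = np.fft.fft2(slm[k] * est)
              field = meas_amp[k] * np.exp(1j * np.angle(field))   # enforce measured amplitude
              updates.append(np.conj(slm[k]) * np.fft.ifft2(field))  # undo the modulation
          est = np.mean(updates, axis=0)

      # Phase retrieval recovers the field up to a global phase; align before comparing.
      global_phase = np.vdot(est, truth) / abs(np.vdot(est, truth))
      err = np.linalg.norm(est * global_phase - truth) / np.linalg.norm(truth)
      print(f"relative reconstruction error: {err:.3f}")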
  • Publication number: 20200349729
    Abstract: A method for a passive single-viewpoint 3D imaging system comprises capturing an image from a camera having one or more phase masks. The method further includes using a reconstruction algorithm for estimation of a 3D or depth image.
    Type: Application
    Filed: May 1, 2020
    Publication date: November 5, 2020
    Applicants: William Marsh Rice University, Carnegie Mellon University
    Inventors: Yicheng Wu, Vivek Boominathan, Huaijin Chen, Aswin C. Sankaranarayanan, Ashok Veeraraghavan
  • Patent number: 10694123
    Abstract: A method for imaging objects includes illuminating an object with a light source of an imaging device, and receiving an illumination field reflected by the object. An aperture field that intercepts a pupil of the imaging device is an optical propagation of the illumination field at an aperture plane. The method includes receiving a portion of the aperture field onto a camera sensor, and receiving a sensor field of optical intensity. The method also includes iteratively centering the camera focus at different locations along the Fourier plane to produce a series of sensor fields, and stitching the sensor fields together in the Fourier domain to generate an image. The method also includes determining phase information for each sensor field in the series, applying the phase information to the image, receiving a plurality of illumination fields reflected by the object, and denoising the intensity of the plurality of illumination fields using Fourier ptychography.
    Type: Grant
    Filed: July 14, 2018
    Date of Patent: June 23, 2020
    Assignees: Northwestern University, William Marsh Rice University
    Inventors: Oliver Strider Cossairt, Jason Holloway, Ashok Veeraraghavan, Manoj Kumar Sharma, Yicheng Wu
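    The phase-retrieval loop that recovers each sensor field's phase is too long for a short sketch, but the "stitching in the Fourier domain" step itself is compact: each complex sensor field contributes its spectrum at the passband position it was captured from, overlaps are averaged, and an inverse FFT yields the stitched image. The Python sketch below uses shapes, offsets, and an overlap-averaging rule that are illustrative assumptions, not the patented method.

      import numpy as np

      def stitch_fourier(sensor_fields, offsets, hi_shape):
          """Stitch complex low-resolution sensor fields into one high-resolution image.

          sensor_fields: list of (h, w) complex fields (amplitude plus recovered phase).
          offsets: list of (row, col) top-left positions of each field's passband
                   inside the high-resolution Fourier plane.
          """
          spectrum = np.zeros(hi_shape, dtype=complex)
          weight = np.zeros(hi_shape)
          for field, (r, c) in zip(sensor_fields, offsets):
              h, w = field.shape
              spectrum[r:r + h, c:c + w] += np.fft.fftshift(np.fft.fft2(field))
              weight[r:r + h, c:c + w] += 1.0
          spectrum = np.where(weight > 0, spectrum / np.maximum(weight, 1.0), 0)
          return np.fft.ifft2(np.fft.ifftshift(spectrum))

      # Toy usage: four overlapping 32x32 passbands tiled into a 48x48 spectrum.
      rng = np.random.default_rng(0)
      fields = [rng.random((32, 32)) * np.exp(1j * rng.random((32, 32))) for _ in range(4)]
      offsets = [(0, 0), (0, 16), (16, 0), (16, 16)]
      hi_res = stitch_fourier(fields, offsets, (48, 48))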